Everything posted by Blogger

  1. by: Mead Naji Fri, 28 Nov 2025 17:29:07 +0530 This lesson is for paying subscribers only.
  2. by: Mead Naji Fri, 28 Nov 2025 17:26:12 +0530 This lesson is for paying subscribers only.
  3. by: Mead Naji Fri, 28 Nov 2025 17:24:15 +0530 This lesson is for paying subscribers only.
  4. by: Mead Naji Fri, 28 Nov 2025 17:21:46 +0530 This lesson is for paying subscribers only.
  5. by: Mead Naji Fri, 28 Nov 2025 16:44:47 +0530 Learn Kubernetes the way it should be learned, with real understanding, not just commands. This course takes you from absolute basics to the internal working of a Kubernetes cluster, step by step, with hands-on demos and practical explanations. No copy-paste YAML tutorials. You get real Kubernetes knowledge that sticks. 🧑‍🎓 Who is this course for? This course is ideal for any beginner: developers moving into DevOps or cloud, sysadmins transitioning to containerized infrastructure, engineering students and freshers learning Kubernetes, and anyone who wants a strong Kubernetes foundation. ✅ No prior Kubernetes experience is required, but basic Linux command-line knowledge will help. 🧩 What you’ll learn in this course? The course is divided into three modules. Module 1: Kubernetes Basics & First Workloads. In this module, you’ll build your foundation in Kubernetes and start running workloads on your own system. It covers an introduction to Kubernetes, what Kubernetes is and why we need it, setting up Kubernetes on your local machine, and working with Pods. At the end of the module, you’ll be able to run Kubernetes locally, understand its purpose, and deploy your first pods with confidence. Module 2: Core Kubernetes Concepts. This is where Kubernetes stops being “magic” and starts making sense as you learn how Kubernetes actually manages applications. It covers a deep dive into Pod creation and interaction, labels and selectors, Deployments and workload management, namespaces and configuration basics, and multi-container pod patterns. At the end of the module, you’ll understand how Kubernetes organizes, scales, and manages real-world applications inside a cluster. Module 3: Kubernetes Infrastructure & Internals. Most courses stop at commands. This one goes deeper. You learn about networking, storage, and what happens behind the scenes. This module covers networking in Kubernetes, storage and persistent volumes, a recap with practical demos, and the path from a kubectl command to cluster execution. 💡 You can run the clusters locally with Minikube if you want, which makes it ideal for students who don't want to spend on a cloud-based cluster. This method is covered in the course. By the end, you won’t just use Kubernetes, you’ll understand how your commands flow through the system and become running containers.
  6. by: Akhilesh Mishra Fri, 28 Nov 2025 16:11:34 +0530 This lesson is for subscribers only.
  7. by: Akhilesh Mishra Fri, 28 Nov 2025 16:09:45 +0530 This lesson is for subscribers only.
  8. by: Akhilesh Mishra Fri, 28 Nov 2025 16:08:33 +0530 This lesson is for subscribers only.
  9. by: Akhilesh Mishra Fri, 28 Nov 2025 16:06:27 +0530 This lesson is for subscribers only.
  10. by: Akhilesh Mishra Fri, 28 Nov 2025 16:03:51 +0530 This lesson is for subscribers only.
  11. by: Akhilesh Mishra Fri, 28 Nov 2025 16:02:00 +0530 This lesson is for subscribers only.
  12. by: Akhilesh Mishra Fri, 28 Nov 2025 16:00:43 +0530 This lesson is for subscribers only.
  13. by: Akhilesh Mishra Fri, 28 Nov 2025 15:42:59 +0530 Think of Terraform as a construction manager. Resources are the buildings you construct. Data sources are the surveys you conduct before building. Dependencies are the order in which construction must happen. You can’t build the roof before the walls, right? Resources: The Heart of Everything. If Terraform were a programming language, resources would be the objects. They’re things you create, modify, and delete. Every piece of infrastructure — servers, databases, networks, load balancers — starts as a resource in your code. The anatomy of a resource: Two parts matter most. The type tells Terraform what kind of thing to create. The name is how you refer to it in your code. That’s it. resource "aws_instance" "web" { ami = "ami-12345678" instance_type = "t2.micro" } Here’s what beginners often miss: the name web isn’t the name your server gets in AWS. It’s just a label for your Terraform code. Think of it like a variable name in programming. The actual AWS resource might be named something completely different (usually via tags). Arguments vs Attributes - the key distinction: You provide arguments (the input values). Terraform gives you attributes (the output values). You tell Terraform instance_type = "t2.micro". Terraform tells you back id = "i-1234567890abcdef0" and public_ip = "54.123.45.67" after creation. This distinction is crucial because attributes only exist after Terraform creates the resource. You can’t reference an instance’s IP address before it exists. Terraform figures out the order automatically. References connect everything: When you write aws_instance.web.id, you’re doing three things: referencing the resource type (aws_instance), referencing your local name for it (web), and accessing an attribute it exposes (id). This is how infrastructure connects. One resource references another’s attributes. VPC ID goes into subnet configuration. Subnet ID goes into instance configuration. These references tell Terraform the construction order. Why the two-part naming? Because you might create multiple instances of the same type. You could have aws_instance.web, aws_instance.db, and aws_instance.cache. The type describes what it is. The name describes which one. Data Sources: Reading the Existing World. Resources create. Data sources read. That’s the fundamental difference. Real infrastructure doesn’t exist in a vacuum. You’re deploying into an existing VPC someone else created. You need the latest Ubuntu AMI that changes monthly. You’re reading a secret from a vault. None of these things should you create — you just need to reference them. Data sources are queries: Think of them as SELECT statements in SQL. You’re querying existing infrastructure and pulling information into your Terraform code. data "aws_ami" "ubuntu" { most_recent = true owners = ["099720109477"] filter { name = "name" values = ["ubuntu/images/hvm-ssd/ubuntu-focal-20.04-*"] } } This doesn’t create an AMI. It searches for one that already exists and gives you its ID. Why data sources matter for infrastructure code: Imagine hardcoding AMI IDs. Next month, there’s a new Ubuntu release with security patches. You have to find the new AMI ID and update your code. Or, use a data source that always finds the latest. Code stays the same, infrastructure stays updated. The same principle applies to everything external: VPCs, DNS zones, availability zones, TLS certificates, secrets. If it exists before your Terraform code runs, use a data source.
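To make that last point concrete, here is a minimal sketch (names illustrative, assuming an AWS account with a default VPC) that reads two more pieces of existing infrastructure and feeds them into a resource:

# Query things that already exist: the default VPC and the usable availability zones
data "aws_vpc" "default" {
  default = true
}

data "aws_availability_zones" "available" {
  state = "available"
}

# Use the query results when creating something new
resource "aws_subnet" "example" {
  vpc_id            = data.aws_vpc.default.id
  availability_zone = data.aws_availability_zones.available.names[0]
  cidr_block        = "172.31.250.0/24" # illustrative; must fit inside the VPC's CIDR
}

Both lookups run at plan time, so the subnet always lands in whatever VPC and zone actually exist in the target account.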
The reference difference: Resources are type.name.attribute. Data sources are data.type.name.attribute. That extra data. prefix tells Terraform and you that this is a read operation, not a create operation. Data sources run first: Before Terraform creates anything, it runs all data source queries. This makes sense—you need to read information before you can use it to create things. String Interpolation: Building Dynamic Infrastructure. Infrastructure can’t be static. You need bucket names that include environment names. Server names that include region. Tags that reference other resources. String interpolation is how you build these dynamic values. The rule is simple: Use ${} when building strings. Don’t use it for direct references. bucket = "myapp-${var.environment}-data" # String building - USE ${} ami = data.aws_ami.ubuntu.id # Direct reference - NO ${} Why the distinction? In Terraform’s early days (before version 0.12), you needed "${var.name}" everywhere. It was verbose and ugly. Modern Terraform is cleaner — interpolation only when actually building strings. What you can put inside interpolation: Everything. Variables, resource attributes, conditional expressions, function calls. If it produces a value, you can interpolate it. name = "${var.project}-${var.environment}-${count.index + 1}" Common beginner mistake: Writing instance_type = "${var.instance_type}". The ${} is unnecessary here — you’re not building a string, just referencing a variable. Just write instance_type = var.instance_type. When interpolation shines: Multi-part names. Constructing URLs. Building complex strings from multiple sources. Any time “I need to combine these values into text.” Dependencies: The Hidden Graph. This is where Terraform’s magic happens. You write resources in any order. Terraform figures out the correct creation order automatically. How? By analyzing dependencies. Implicit Dependencies: The Automatic Kind. When you reference one resource’s attribute in another resource, you’ve created a dependency. Terraform sees the reference and knows the order. Mental model: Think of dependencies as arrows in a diagram. VPC -> Subnet -> Instance. Each arrow means “must exist before.” Terraform builds this diagram automatically by finding all the attribute references in your code. resource "aws_vpc" "main" { cidr_block = "10.0.0.0/16" } resource "aws_subnet" "app" { vpc_id = aws_vpc.main.id # Reference creates dependency cidr_block = "10.0.1.0/24" } resource "aws_instance" "web" { subnet_id = aws_subnet.app.id # Another dependency ami = "ami-12345678" instance_type = "t2.micro" } You can write these in any order in your files. Terraform sees aws_vpc.main.id referenced in the subnet, and aws_subnet.app.id referenced in the instance. It builds the dependency graph: VPC -> Subnet -> Instance. Why this matters: Terraform creates things in parallel when possible. If you define 10 S3 buckets with no dependencies, Terraform creates all 10 simultaneously. If you define a VPC with 10 subnets, it creates the VPC first, then all 10 subnets in parallel. The key insight: Every attribute reference is a dependency. resource.name.attribute means “I need this resource to exist first.” Explicit Dependencies: The Manual Kind. Sometimes Terraform can’t detect dependencies automatically. The relationship exists, but there’s no attribute reference to signal it. Classic example - IAM: You create an IAM role. You attach a policy to it. You launch an instance with that role. The instance references the role, but not the policy.
Terraform might launch the instance before the policy attaches, causing errors. resource "aws_instance" "app" { ami = "ami-12345678" instance_type = "t2.micro" depends_on = [aws_iam_role_policy.app_policy] } The depends_on argument says “don’t create this until that other thing exists,” even though we’re not referencing any of its attributes. When you need explicit dependencies: timing matters but there’s no direct attribute reference, resources must exist in a certain order for external reasons, or you’re working around provider bugs or limitations. Use sparingly: Explicit dependencies reduce parallelism. Terraform must wait for the dependency before proceeding. Only use them when implicit dependencies won’t work. The Dependency Graph. Behind the scenes, Terraform builds a directed acyclic graph (DAG) of all your resources. Nodes are resources. Edges are dependencies. This graph determines everything: what to create first, what can be created in parallel, and what to destroy first when tearing down. Directed: Dependencies have direction. A depends on B, not the other way around. Acyclic: No loops allowed. If A depends on B, B can’t depend on A (even indirectly). Terraform will error on circular dependencies—they’re impossible to resolve. Why you should care: Understanding the dependency graph helps you debug. If Terraform is creating things in a weird order, check the references. If it’s failing on circular dependencies, look for cycles in your attribute references. Viewing the graph: Run terraform graph to see the actual graph Terraform built. It’s mostly useful for debugging complex configurations. How It All Fits Together. Every Terraform configuration is a combination of these concepts: resources define what to create, data sources query what exists, interpolation builds dynamic values, and dependencies determine the order. The workflow: Data sources run first (they’re just queries). Terraform analyzes all resource definitions and builds the dependency graph. It creates resources in the correct order, parallelizing when possible. References between resources become the glue. The mental shift: You’re not writing a script that executes top-to-bottom. You’re describing desired state. Terraform figures out how to achieve it. That’s declarative infrastructure. Why beginners struggle: They think procedurally. “First create this, then create that.” Terraform doesn’t work that way. You declare everything you want. Terraform analyzes the dependencies and figures out the procedure. Common Mistakes and How to Avoid Them. Mistake 1: Using resource names as identifiers - Resource names in Terraform are local to your code. They’re not the names resources get in your cloud provider. Use tags or name attributes for that. Mistake 2: Trying to reference attributes before resources exist - You can’t use aws_instance.web.public_ip in a variable default value. The instance doesn’t exist when Terraform evaluates variables. Use locals or outputs instead. Mistake 3: Over-using explicit dependencies - If you’re writing lots of depends_on, you’re probably doing something wrong. Most dependencies should be implicit through attribute references. Mistake 4: Confusing data sources with resources - Data sources don’t create anything. If you need to create something, use a resource, not a data source. Mistake 5: Hardcoding values that data sources should provide - Don’t hardcode AMI IDs, availability zones, or other values that change. Use data sources to query them dynamically.
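To illustrate Mistake 2, here is a minimal sketch (resource and output names are illustrative) of exposing a created attribute the right way, through a local and an output, rather than a variable default:

resource "aws_instance" "web" {
  ami           = "ami-12345678"
  instance_type = "t2.micro"
}

# Wrong: a variable default cannot reference a resource attribute,
# because variables are evaluated before any resources exist.
# variable "web_ip" { default = aws_instance.web.public_ip }  # invalid

# Right: derive the value with a local and publish it with an output
locals {
  web_url = "http://${aws_instance.web.public_ip}"
}

output "web_public_ip" {
  value = aws_instance.web.public_ip
}

output "web_url" {
  value = local.web_url
}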
Quick ReferenceResources: resource "type" "name" { argument = "value" } # Reference: type.name.attribute Data Sources: data "type" "name" { filter = "value" } # Reference: data.type.name.attribute String Interpolation: "prefix-${var.name}-suffix" # Building strings var.name # Direct reference Dependencies: # Implicit (automatic) subnet_id = aws_subnet.main.id # Explicit (manual) depends_on = [aws_iam_role.app] Master these four concepts and you’ll understand 80% of Terraform. Everything else builds on this foundation. You now understand the core building blocks: resources, data sources, and dependencies. But what if you need to create multiple similar resources? Copy pasting code isn’t the answer. In the next chapter, we’ll explore count, for_each, and conditionals—the tools that make your infrastructure code truly dynamic and scalable.
  14. by: Akhilesh Mishra Fri, 28 Nov 2025 15:39:09 +0530 How does Terraform remember what it created? How does it connect to AWS or Azure? Two concepts answer these questions: State (Terraform’s memory) and Providers (Terraform’s translators). Without state and providers, Terraform would be useless. Let’s understand them. What is Terraform State?State is Terraform’s memory. After terraform apply, it stores what it created in terraform.tfstate. Run this example: resource "local_file" "example" { content = "Hello from Terraform!" filename = "example.txt" } After terraform apply, check your folder – you’ll see example.txt and terraform.tfstate. Expected Files after applyState answers three questions: What exists? – Resources Terraform createdWhat changed? – Differences from your current configWhat to do? – Create, update, or delete?Change the content and run terraform plan. Terraform compares the state with your new config and shows exactly what will change. That’s the power of state. Local vs Remote StateLocal state works for solo projects. But teams need remote state stored in shared locations (S3, Azure Storage, Terraform Cloud). Remote state with S3:terraform { backend "s3" { bucket = "my-terraform-state" key = "terraform.tfstate" region = "us-west-2" dynamodb_table = "terraform-locks" # Enables locking } } State locking prevents disasters when multiple people run Terraform simultaneously. Person A locks the state, Person B waits. Simple, but crucial for teams. Backend ConfigurationBackends tell Terraform where to store state. Local backend uses files on your computer. Remote backends use cloud storage. Local backend (default): # No configuration needed - stores terraform.tfstate locally S3 backend (AWS): terraform { backend "s3" { bucket = "my-terraform-state" key = "prod/terraform.tfstate" region = "us-west-2" encrypt = true dynamodb_table = "terraform-locks" } } Azure backend: terraform { backend "azurerm" { resource_group_name = "terraform-state" storage_account_name = "tfstatestore" container_name = "tfstate" key = "prod.terraform.tfstate" } } GCS backend (Google Cloud): terraform { backend "gcs" { bucket = "my-terraform-state" prefix = "prod" } } Terraform Cloud: terraform { backend "remote" { organization = "my-org" workspaces { name = "production" } } } Backend InitializationAfter adding backend config, initialize: terraform init Terraform downloads backend provider and configures it. If state already exists locally, Terraform asks to migrate it to remote backend. Migration example: Initializing the backend... Do you want to copy existing state to the new backend? Pre-existing state was found while migrating the previous "local" backend to the newly configured "s3" backend. No existing state was found in the newly configured "s3" backend. Do you want to copy this state to the new "s3" backend? Enter "yes" to copy and "no" to start with an empty state. Enter a value: yes Type yes and Terraform migrates your state. Partial Backend ConfigurationDon’t hardcode sensitive values. 
Use partial configuration: backend.tf: terraform { backend "s3" { # Dynamic values provided at init time } } backend-config.hcl: bucket = "my-terraform-state" key = "prod/terraform.tfstate" region = "us-west-2" dynamodb_table = "terraform-locks" Initialize with config: terraform init -backend-config=backend-config.hcl Or via CLI: terraform init \ -backend-config="bucket=my-terraform-state" \ -backend-config="key=prod/terraform.tfstate" \ -backend-config="region=us-west-2" Use case: Different backends per environment without changing code. Changing BackendsSwitching backends? Change config and re-run init: terraform init -migrate-state Terraform detects backend change and migrates state automatically. Reconfigure without migration: terraform init -reconfigure Starts fresh, doesn’t migrate existing state. Backend Best PracticesFor S3: - Enable bucket versioning (rollback bad changes) - Enable encryption at rest - Use DynamoDB for state locking - Restrict bucket access with IAM For teams: - Always use remote backends - Never use local backends in production - One state file per environment - Use separate AWS accounts for different environments Example S3 setup: # Create S3 bucket aws s3api create-bucket \ --bucket my-terraform-state \ --region us-west-2 # Enable versioning aws s3api put-bucket-versioning \ --bucket my-terraform-state \ --versioning-configuration Status=Enabled # Create DynamoDB table for locking aws dynamodb create-table \ --table-name terraform-locks \ --attribute-definitions AttributeName=LockID,AttributeType=S \ --key-schema AttributeName=LockID,KeyType=HASH \ --billing-mode PAY_PER_REQUEST What Are Providers?Providers are translators. They connect Terraform to services like AWS, Azure, Google Cloud, and 1,000+ others. Basic AWS provider: provider "aws" { region = "us-west-2" } resource "aws_s3_bucket" "my_bucket" { bucket = "my-unique-bucket-12345" # Must be globally unique } Authentication: Use AWS CLI (aws configure) or environment variables. Never hardcode credentials in your code. Provider Requirements and VersionsAlways specify provider versions to prevent surprises: terraform { required_providers { aws = { source = "hashicorp/aws" version = "~> 5.0" # 5.x but not 6.0 } } } provider "aws" { region = "us-west-2" } resource "random_string" "suffix" { length = 6 special = false upper = false } resource "aws_s3_bucket" "example" { bucket = "my-bucket-${random_string.suffix.result}" } Version operators: = (exact), >= (minimum), ~> (pessimistic constraint). Provider Aliases: Multiple RegionsNeed the same provider with different configurations? Use aliases: provider "aws" { region = "us-west-2" } provider "aws" { alias = "east" region = "us-east-1" } resource "aws_s3_bucket" "west" { bucket = "west-bucket-12345" } resource "aws_s3_bucket" "east" { provider = aws.east bucket = "east-bucket-12345" } This creates buckets in two different regions. Perfect for multi-region deployments or backups. State Best PracticesMust do: - Add .tfstate to .gitignore (state files contain secrets) - Use remote state with encryption for teams - Enable state locking to prevent conflicts - Enable versioning on state storage (S3, etc.) 
Never do: - Manually edit state files - Commit state to git - Ignore state locking errors - Delete state without backups Essential State CommandsView state: terraform state list # List all resources terraform state show aws_s3_bucket.example # Show resource details Modify state: terraform state mv <old> <new> # Rename resource terraform state rm <resource> # Remove from state terraform import <resource> <id> # Import existing resource Example - Renaming a resource: # Change resource name in code, then: terraform state mv aws_s3_bucket.old aws_s3_bucket.new terraform plan # Should show "No changes" Advanced State ManagementBeyond basic commands, here’s what you need for real-world scenarios: Pulling and Pushing StatePull state to local file: terraform state pull > backup.tfstate Creates a backup. Useful before risky operations. Push state from local file: terraform state push backup.tfstate Restore state from backup. Use with extreme caution. Moving Resources Between ModulesRefactoring code? Move resources without recreating them: # Moving to a module terraform state mv aws_instance.web module.servers.aws_instance.web # Moving from a module terraform state mv module.servers.aws_instance.web aws_instance.web Removing Resources Without DestroyingRemove from state but keep the actual resource: terraform state rm aws_s3_bucket.keep_this Use case: You created a resource with Terraform but now want to manage it manually. Remove it from state, and Terraform forgets about it. Importing Existing ResourcesSomeone created resources manually? Import them into Terraform: # Import an existing S3 bucket terraform import aws_s3_bucket.imported my-existing-bucket # Import an EC2 instance terraform import aws_instance.imported i-1234567890abcdef0 Steps: Write the resource block in your code (without attributes)Run import command with resource address and actual IDRun terraform plan to see what attributes are missingUpdate your code to match the actual resourceRun terraform plan again until it shows no changesState Locking DetailsWhen someone is running Terraform, the state is locked. If a lock gets stuck: # Force unlock (dangerous!) terraform force-unlock <lock-id> Only use this if you’re absolutely sure no one else is running Terraform. Replacing ProvidersMigrating from one provider registry to another: terraform state replace-provider registry.terraform.io/hashicorp/aws \ registry.example.com/hashicorp/aws Useful when moving to private registries. State Inspection TricksShow specific resource: terraform state show aws_instance.web Shows all attributes of a single resource. Filter state list: terraform state list | grep "aws_instance" Find all EC2 instances in your state. Count resources: terraform state list | wc -l How many resources does Terraform manage? When Things Go WrongState out of sync with reality? terraform refresh # Or newer approach: terraform apply -refresh-only Corrupted state? Check your state backups (S3 versioning saves you here)Restore from backup using terraform state pushAlways test in a non-prod environment firstConflicting states in team? 
Enable state locking (DynamoDB with S3) Use remote state, never local for teams - Implement CI/CD that runs Terraform centrally Quick ReferenceBackends: # S3 terraform { backend "s3" { bucket = "my-state-bucket" key = "terraform.tfstate" region = "us-west-2" dynamodb_table = "terraform-locks" } } # Azure terraform { backend "azurerm" { resource_group_name = "terraform-state" storage_account_name = "tfstatestore" container_name = "tfstate" key = "terraform.tfstate" } } terraform init # Initialize backend terraform init -backend-config=file.hcl # Partial config terraform init -migrate-state # Migrate to new backend Providers: # Single provider provider "aws" { region = "us-west-2" } # With version constraint terraform { required_providers { aws = { source = "hashicorp/aws" version = "~> 5.0" } } } # Multiple regions with aliases provider "aws" { alias = "east" region = "us-east-1" } resource "aws_s3_bucket" "east_bucket" { provider = aws.east bucket = "my-bucket" } Common Commands: terraform state list # List resources terraform state mv <old> <new> # Rename resource terraform state rm <resource> # Remove from state terraform import <res> <id> # Import existing resource You now understand how Terraform remembers (state) and connects (providers). These two concepts are fundamental to everything else you’ll do with Terraform. State and providers handle the “how” and “where” of Terraform. Now let’s explore the “what”—the actual infrastructure you create. In the next chapter, we’ll dive deep into resources, data sources, and the dependency system that makes Terraform intelligent about the order of operations.
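Closing out this chapter, here is a minimal sketch of step 1 of the import workflow described above: the placeholder resource block you write before running terraform import (the bucket name and resource label are illustrative):

# Step 1: declare the resource you are about to import, with minimal arguments
resource "aws_s3_bucket" "imported" {
  bucket = "my-existing-bucket" # must match the real bucket you pass to terraform import
}

# Step 2 (from the article): terraform import aws_s3_bucket.imported my-existing-bucket
# Steps 3-5: run terraform plan, copy the reported attributes into this block,
# and repeat until the plan shows no changes.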
  15. by: Akhilesh Mishra Fri, 28 Nov 2025 15:35:51 +0530 Basic Variable Types. Terraform has three basic types: string, number, and bool. variable "name" { type = string description = "User name" default = "World" } variable "counts" { type = number default = 5 } variable "enabled" { type = bool default = true } Use them: resource "local_file" "example" { content = "Hello, ${var.name}! Count: ${var.counts}, Enabled: ${var.enabled}" filename = "output.txt" } 🚧 You cannot use reserved words like count as a variable name. Change values: terraform apply -var="name=Alice" -var="counts=10" Always add a description. Future you will thank you. Advanced Variable Types. Real infrastructure needs complex data structures. Lists: Ordered collections of values: variable "availability_zones" { type = list(string) default = ["us-west-2a", "us-west-2b", "us-west-2c"] } Access elements: locals { first_az = var.availability_zones[0] # "us-west-2a" all_zones = join(", ", var.availability_zones) } Use in resources: resource "aws_subnet" "public" { count = length(var.availability_zones) availability_zone = var.availability_zones[count.index] # ... other config } Maps: Key-value pairs: variable "instance_types" { type = map(string) default = { dev = "t2.micro" prod = "t2.large" } } Access values: resource "aws_instance" "app" { instance_type = var.instance_types["prod"] # Or with lookup function instance_type = lookup(var.instance_types, var.environment, "t2.micro") } Objects: Structured data with different types: variable "database_config" { type = object({ instance_class = string allocated_storage = number multi_az = bool backup_retention = number }) default = { instance_class = "db.t3.micro" allocated_storage = 20 multi_az = false backup_retention = 7 } } Use in resources: resource "aws_db_instance" "main" { instance_class = var.database_config.instance_class allocated_storage = var.database_config.allocated_storage multi_az = var.database_config.multi_az backup_retention_period = var.database_config.backup_retention } Maps of Objects: The power combo - multiple structured items: variable "servers" { type = map(object({ size = string disk = number })) default = { web-1 = { size = "t2.micro", disk = 20 } web-2 = { size = "t2.small", disk = 30 } } } resource "aws_instance" "servers" { for_each = var.servers instance_type = each.value.size tags = { Name = each.key } root_block_device { volume_size = each.value.disk } } Sets and Tuples: Set - Like a list but unordered and unique: variable "allowed_ips" { type = set(string) default = ["10.0.0.1", "10.0.0.2"] } Tuple - Fixed-length list with specific types: variable "server_config" { type = tuple([string, number, bool]) default = ["t2.micro", 20, true] } Rarely used. Stick with lists and maps for most cases. Variable Validation. Add rules to validate input: variable "environment" { type = string description = "Environment name" validation { condition = contains(["dev", "staging", "prod"], var.environment) error_message = "Environment must be dev, staging, or prod." } } variable "instance_count" { type = number default = 1 validation { condition = var.instance_count >= 1 && var.instance_count <= 10 error_message = "Instance count must be between 1 and 10." } } Catches errors before Terraform runs. Sensitive Variables. Mark secrets as sensitive: variable "db_password" { type = string sensitive = true } Won’t appear in logs or plan output. Still stored in state though (encrypt your state!). Variable Precedence. Multiple ways to set variables.
Terraform picks in this order (highest to lowest): command-line flags (-var="key=value"), then *.auto.tfvars files (in alphabetical order), then the terraform.tfvars file, then environment variables (TF_VAR_name), and finally the default value in the variable block. Setting Variables with Files. Create terraform.tfvars: environment = "prod" instance_type = "t2.large" database_config = { instance_class = "db.t3.large" allocated_storage = 100 multi_az = true backup_retention = 30 } Run terraform apply and it picks up the values automatically. Or use environment-specific files: # dev.tfvars environment = "dev" instance_type = "t2.micro" terraform apply -var-file="dev.tfvars" Locals: Computed Values. Variables are inputs. Locals are calculated values you use internally. variable "project_name" { type = string default = "myapp" } variable "environment" { type = string default = "dev" } locals { resource_prefix = "${var.project_name}-${var.environment}" common_tags = { Project = var.project_name Environment = var.environment ManagedBy = "Terraform" } is_production = var.environment == "prod" backup_count = local.is_production ? 3 : 1 } resource "aws_s3_bucket" "data" { bucket = "${local.resource_prefix}-data" tags = local.common_tags } Use var. for variables, local. for locals. Outputs. Display values after apply: output "bucket_name" { description = "Name of the S3 bucket" value = aws_s3_bucket.data.id } output "is_production" { value = local.is_production } output "db_endpoint" { value = aws_db_instance.main.endpoint sensitive = true # Don't show in logs } View outputs: terraform output terraform output bucket_name Real-World Example. variable "environment" { type = string validation { condition = contains(["dev", "staging", "prod"], var.environment) error_message = "Must be dev, staging, or prod." } } variable "app_config" { type = object({ instance_type = string min_size = number }) } locals { common_tags = { Environment = var.environment ManagedBy = "Terraform" } # Override for production min_size = var.environment == "prod" ? 3 : var.app_config.min_size } resource "aws_autoscaling_group" "app" { name = "myapp-${var.environment}-asg" min_size = local.min_size desired_capacity = local.min_size tags = [ for key, value in local.common_tags : { key = key value = value propagate_at_launch = true } ] } Quick Reference. Basic types: variable "name" { type = string } variable "counts" { type = number } variable "enabled" { type = bool } Complex types: variable "zones" { type = list(string) } variable "types" { type = map(string) } variable "config" { type = object({ name = string, size = number }) } variable "servers" { type = map(object({ size = string, disk = number })) } Validation: validation { condition = contains(["dev", "prod"], var.env) error_message = "Must be dev or prod." } Locals and Outputs: locals { name = "${var.project}-${var.env}" } output "result" { value = aws_instance.app.id sensitive = true } Variables make your code flexible. Complex types model real infrastructure. Locals keep things DRY. Outputs share information. With variables and locals in your toolkit, you now know how to make your Terraform code flexible and maintainable. But where does Terraform store the information about what it created? And how does it connect to AWS, Azure, or other cloud providers? That’s what we’ll explore next with state management and providers.
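One precedence detail mentioned above but not shown: *.auto.tfvars files are loaded automatically, without a -var-file flag. A small sketch (file names illustrative), assuming the environment and instance_type variables declared earlier in this chapter:

# prod.auto.tfvars -- picked up automatically on every plan/apply
environment   = "prod"
instance_type = "t2.large"

# dev.tfvars -- only used when passed explicitly:
#   terraform apply -var-file="dev.tfvars"

A -var on the command line still overrides both, per the precedence order above.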
  16. by: Akhilesh Mishra Fri, 28 Nov 2025 15:34:33 +0530 Step 1: Install Terraform. For macOS users: brew install terraform For Windows users: Download from the official Terraform website and add it to your PATH. For Linux users: wget https://releases.hashicorp.com/terraform/1.12.0/terraform_1.12.0_linux_amd64.zip unzip terraform_1.12.0_linux_amd64.zip sudo mv terraform /usr/local/bin/ Step 2: Verify Installation. terraform version You should see something like: Terraform v1.12.0 Step 3: Create Your First Terraform File. Create a new directory for your first Terraform project: mkdir my-first-terraform cd my-first-terraform Create a file called main.tf and add this simple configuration: # This is a comment in Terraform resource "local_file" "hello" { content = "Hello, Terraform World!" filename = "hello.txt" } This simple example creates a text file on your local machine. Not very exciting, but it’s a great way to see Terraform in action without needing cloud credentials. Step 4: The Magic Commands. Now comes the fun part! Run these commands in order: Initialize Terraform: terraform init This downloads the providers (plugins) needed for your configuration. See what Terraform plans to do: terraform plan This shows you exactly what changes Terraform will make. Apply the changes: terraform apply Type yes when prompted, and watch Terraform create your file! Clean up: terraform destroy This removes everything Terraform created. What Just Happened? Congratulations! You just used Terraform to manage infrastructure (even if it was just a simple file). Here’s what each command did: terraform init set up the working directory and downloaded necessary plugins, terraform plan showed you what changes would be made, terraform apply actually made the changes, and terraform destroy cleaned everything up. This same pattern works whether you’re creating a simple file or managing thousands of cloud resources. Essential Terraform Commands. Beyond the basic workflow, here are commands you’ll use daily: terraform validate - Check if your configuration is syntactically valid: terraform validate Run this before plan. Catches typos and syntax errors instantly. terraform fmt - Format your code to follow standard style: terraform fmt Makes your code consistent and readable. Run it before committing. terraform show - Inspect the current state: terraform show Shows you what Terraform has created. terraform output - Display output values: terraform output Useful for getting information like IP addresses or resource IDs. terraform console - Interactive console for testing expressions: terraform console Test functions and interpolations before using them in code. Type exit to quit. terraform refresh - Update state to match real infrastructure: terraform refresh 📋 Deprecated in favor of terraform apply -refresh-only, but worth knowing. Common Command Patterns. See the plan without applying: terraform plan -out=tfplan Apply a saved plan: terraform apply tfplan Auto-approve (careful!): terraform apply -auto-approve Destroy a specific resource: terraform destroy -target=aws_instance.example Format all files recursively: terraform fmt -recursive These commands form your daily Terraform workflow. You’ll use init, validate, fmt, plan, and apply constantly.
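The terraform output command above only has something to show once your configuration defines output values. A minimal sketch (output name illustrative) extending the hello.txt example from Step 3:

# Add to main.tf alongside the local_file resource from Step 3
output "hello_file_path" {
  description = "Where Terraform wrote the greeting"
  value       = local_file.hello.filename
}

# After terraform apply, read it back with:
#   terraform output hello_file_path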
Now that you understand what Terraform is and how to use its basic commands, let’s dive deeper into the core concepts that make Terraform powerful. We’ll start with variables and locals—the building blocks that make your infrastructure code flexible and reusable. I have also built Living DevOps, a real-world DevOps education platform. I’ve spent years building, breaking, and fixing systems in production. Now I teach what I’ve learned in my free time. You’ll find resources, roadmaps, blogs, and courses around real-world DevOps. No fluff. No theory-only content. Just practical stuff that actually works in production. Living With DevOps
  17. by: Akhilesh Mishra Fri, 28 Nov 2025 15:33:10 +0530 If you go back two decades, everyone used those physical servers (produced by IBM, HP, and Cisco), which took weeks to setup correctly before we could run the applications on them. Then came the time of virtualization. Sharing computing resources across multiple OS installations using hypervisor-based virtualization technologies such as VMware became the new normal. It reduced the time to spin up a server to run your application but also increased complexity. Subsequently, we got AWS, which revolutionized computing, and a new era of cloud computing became streamlined. After AWS, other big tech companies such as Microsoft and Google launched their cloud offerings named Azure and Google Cloud Platform, respectively. In the cloud, you can spin up a server in a few minutes with just a few clicks. Creating and managing a few servers was very easy, but as the number of servers and their configurations grew, manual tracking became a significant challenge. That’s where Infrastructure as Code (IaC) and Terraform came to the rescue, and trust me, once you understand what they can do, you’ll wonder how you ever lived without them. What is Infrastructure as Code?Infrastructure as Code is exactly what it sounds like – managing and provisioning your infrastructure (servers, networks, databases, etc.) through code instead of manual processes. Instead of clicking through web consoles or running manual commands, you write code that describes what you want your infrastructure to look like. The Problems IaC SolvesManual configuration chaos and deployment failures “It works on my machine” syndromeScaling nightmares across multiple environmentsLost documentation and tribal knowledgeSlow disaster recoveryThen came Terraform, and it changed the gameSo what is Terraform? Terraform is an open-source Infrastructure as Code tool developed by HashiCorp that makes managing infrastructure as simple as writing a shopping list. Here’s what makes Terraform special: 1. It’s Written in GoTerraform is built in Golang, which gives it superpowers for creating infrastructure in parallel. While other tools are still thinking about what to do, Terraform is already building your servers, networks, and databases simultaneously. 2. Uses HCL (HashiCorp Configuration Language)Terraform uses HCL, which is designed to be human-readable and easy to understand. Don’t worry if you haven’t heard of HCL – it’s so intuitive that you’ll be writing infrastructure code in no time. Here’s a simple example of what Terraform code looks like: resource "aws_instance" "web_server" { ami = "ami-12345678" instance_type = "t2.micro" tags = { Name = "My Web Server" Environment = "Production" } } See how readable that is? We’re creating an AWS instance (a virtual server) called “web_server” with specific settings. Even if you’ve never seen Terraform code before, you can probably guess what this does. 3. Cloud-Agnostic MagicHere’s where Terraform really shines – it works with ANY cloud provider. AWS, Azure, Google Cloud, DigitalOcean, even on-premises systems. You learn Terraform once, and you can manage infrastructure anywhere. 4. State ManagementTerraform keeps track of what it has created in something called a “state file.” This means it knows exactly what exists and what needs to be changed, created, or destroyed. It’s like having a super-smart assistant who remembers everything. 
Why Terraform Became the King of IaCYou might be wondering: “Why should I learn Terraform when there are other tools like AWS CloudFormation or Azure Resource Manager?” Great question! Here’s why Terraform has become the go-to choice for infrastructure management: 1. One Tool to Rule Them AllMost cloud providers have their own IaC tools (AWS CloudFormation, Azure ARM templates, etc.), but they only work with their specific cloud. Terraform works with over 1,000 providers, from major cloud platforms to niche services. Learn it once, use it everywhere. 2. Huge Community and EcosystemTerraform has a massive community creating and sharing modules (think of them as infrastructure blueprints). Need to set up a web application with a database? There’s probably a module for that. Want to configure monitoring? There’s a module for that too. 3. Declarative ApproachWith Terraform, you describe what you want (the end state), not how to get there. You say “I want a web server with these specifications,” and Terraform figures out all the steps needed to make it happen. 4. Plan Before You ApplyOne of Terraform’s best features is the ability to see exactly what changes will be made before applying them. It’s like having a crystal ball that shows you the future of your infrastructure. Real-World Example: Why You Need ThisLet me paint you a picture of why this matters. Imagine you’re working at a company that needs to: Deploy a web application across development, staging, and production environments.Ensure all environments are identicalScale up during peak timesQuickly recover from disastersMaintain security and compliance standardsWithout Terraform: You’d spend weeks manually setting up each environment, documenting every step, praying nothing breaks, and probably making small mistakes that cause mysterious issues months later. With Terraform: You write the infrastructure code once, test it in development, then deploy identical environments to staging and production with a single command. Need to scale up? Change a number in your code and redeploy. Disaster recovery? Run the same code in a different region.
  18. by: Akhilesh Mishra Fri, 28 Nov 2025 15:31:09 +0530 Stop clicking around cloud dashboards. Start building reproducible, version-controlled, scalable infrastructure using Terraform, the industry standard for Infrastructure as Code. This course takes you from first terraform init to real-world Terraform architectures with modules, best practices, and production workflows. 👉 Designed for Linux users, DevOps engineers, cloud learners, and sysadmins transitioning to modern IaC. Most Terraform tutorials either stay too basic or jump straight into complex setups without building strong foundations. This course does both. You don’t just learn commands. You understand the logic and design decisions behind Terraform infrastructure. 🧑‍🎓 Who is this course for?This course is built for people who want real skills, not just certificates: Linux users who want to move into cloud & DevOpsSystem administrators shifting towards Infrastructure as CodeAspiring DevOps engineers building their toolchainDevelopers tired of manual server configurationAnyone who wants to treat infrastructure like code (the right way)🕺No prior Terraform experience required, but basic Linux command-line knowledge will help. 🧩 What you’ll learn in this course?Chapter 1: Infrastructure as Code – Here We Go Understand what IaC really means, why Terraform matters and how it fits into modern infrastructure. Chapter 2: Getting Started – Your First Steps Install Terraform, your first configuration, understanding providers, init, plan, and apply. Chapter 3: Terraform Variables and Locals Learn how to write reusable and parameterized configurations using variables and locals. Chapter 4: Terraform State and Providers Dive deep into state files, provider configuration, remote state, and dangers of bad state handling. Chapter 5: Resources, Data Sources, and Dependencies Understand how Terraform actually builds infrastructure graphs and manages dependencies. Chapter 6: Count, For_Each, and Conditionals Dynamic infrastructure with loops, conditional logic, and scalable configuration patterns. Chapter 7: Dynamic Blocks in Terraform Create flexible and advanced configurations using dynamic blocks. Chapter 8: Terraform Modules – Building Blocks You Can Reuse Everywhere Learn how to design, use, and structure modules like real production setups. Chapter 9: Provisioners and Import Handle legacy infrastructure, migration strategies, provisioners, and importing existing resources. Chapter 10: Terraform Functions – Your Code’s Swiss Army Knife Use built-in functions to manipulate data, strings, numbers, and collections. Chapter 11: Workspaces, Null Resources, and Lifecycle Rules Advanced control: multi-environment setups, resource lifecycle management, and more. Chapter 12: Terraform Best Practices and Standards The chapter that converts you from a Terraform user to a Terraform practitioner. Folder structure, naming, workflows, and professional practices. I built Living DevOps platform as a real-world DevOps education platform. I’ve spent years building, breaking, and fixing systems in production. Now I teach what I’ve learned in my free time. You’ll find resources, roadmaps, blogs, and courses around real-world DevOps. No fluff. No theory-only content. Just practical stuff that actually works in production. Living With DevOps
  19. by: Sourav Rudra Fri, 28 Nov 2025 09:50:08 GMT Pebble, the e-paper smartwatch that first launched on Kickstarter in 2012, gained a cult-like following for its innovative approach to wearable tech. Sadly, Fitbit acquired and shut it down in 2016, taking with it the intellectual property (IP) of the brand. The IP eventually landed with Google after their Fitbit acquisition in 2021. Earlier this year, the original creator, Eric Migicovsky, relaunched Pebble through Core Devices LLC, a self-funded company operating via the rePebble consumer brand. This resurrection became possible after Google open-sourced PebbleOS in January 2025. Now, Core Devices has announced something significant for the Pebble community. Great News for Pebble EnthusiastsA screenshot from the demo in this YouTube videoThe complete Pebble software stack is now open source. Everything you need to operate a Pebble watch is now available on GitHub. All of this didn't just materialize overnight; Core Devices has been improving PebbleOS since its open-sourcing and has been pushing those to the public repository. The rebuilt mobile companion apps for Android and iOS just got released as open source too. Without these apps, a Pebble watch is basically a paperweight. These are built on libpebble3, a Kotlin multiplatform library for interacting with Pebble devices. Similarly, the developer tools have been completely overhauled, with the old Ubuntu VirtualBox VM-based workflow being replaced with a modern browser-based one that allows anyone to develop Pebble apps in a web browser. The Pebble Time 2 is very close to coming to market! Hardware schematics are public as well. The complete electrical and mechanical design files for the Pebble 2 Duo are now available with KiCad project files included. You could literally build your own Pebble-compatible device from these files. There are some non-free components still in the mix. The heart rate sensor library for the Pebble Time 2, Memfault crash reporting, and Wispr Flow speech recognition all use proprietary code. But, fret not, these are all optional. You can compile and run the core Pebble software without touching any of them. Core Devices also launched two major software systems alongside the open source releases. The Pebble mobile app now supports multiple app store feeds that anyone can create and operate. This works similar to Linux package managers such as APT or AUR. Here, users can subscribe to different feeds and browse apps from multiple sources instead of relying on a single centralized server. Core Devices already operates its own feed at appstore-api.repebble.com. This feed backs up to the Internet Archive, preserving community-created watchfaces and apps that have been around over the years. Plus, developers can upload new or existing apps through the new Developer Dashboard. Monetization remains possible through services like KiezelPay, so creators can still get paid for their hard work. Why Open Source Everything?Migicovsky learned some painful lessons from Pebble's first shutdown. When Fitbit killed the project in 2016, the community was left scrambling with limited options. The gap between 95% and 100% open source turned out to matter more than anyone expected. Android users couldn't easily get the companion app. Many iOS users faced the same problem. "This made it very hard for the Pebble community to make improvements to their watches after the company behind Pebble shut down," Eric explained in his blog post. 
The reasoning behind this open source push is straightforward. If Core Devices disappears tomorrow, the community has everything they need to keep their watches running. No dependencies, no single point of failure. Apart from that, these new Pebble devices will focus on repairability, with the upcoming Pebble Time 2 (expected March-April 2026) featuring a screwed-in back cover, allowing users to replace the battery themselves instead of needing to buy a new device when the battery gives out. 💬 What are your thoughts on Pebble's comeback? I certainly look forward to new launches by them!
  20. by: Theena Kumaragurunathan Fri, 28 Nov 2025 08:29:16 GMT In a previous column, I argued that self-hosting is resistance in an age where ownership is increasingly illusory. There is increasing evidence that self-hosting is becoming popular among a certain kind of user, say the typical readership of ItsFoss. There is a simple explanation for this shift: people want their data, dollars, and destiny back. Centralized platforms optimized for engagement and extraction are colliding with real-world needs — privacy, compliance, predictability, and craft. Linux, containers, and a flood of polished open-source apps have turned what used to be an enthusiast’s project into a practical step for tech‑savvy users and teams. The demand and supply of self-hosting are headed in the right direction. The Economics of Self-Hosting. I spoke about the demand side of the equation in a previous column. Today, I would like to talk about the supply side. Put simply, self-hosting got easier: Dockerized services, one‑click bundles, and opinionated orchestration kits now cover mail, identity, storage, media, automation, and analytics. And the hardware needed is trivial: a mini‑PC, a NAS, or a Pi can host most personal stacks comfortably. An increasing portion of these users are also conscious of the environmental impact of unchecked consumerism: recycling older hardware for your home-lab is an easy way to ensure that you aren't contributing to mountainous e-waste that poses risks to communities and the environment. The numbers reinforce the vibe. The 2025 selfh.st community survey (~4081 respondents) shows more than four in five self‑hosters run Linux, and Docker is the dominant runtime by a wide margin. While this hasn't become mainstream yet, it highlights one of my arguments: there are costs to trusting big tech with your most important data and services, financial and otherwise. Once such costs outweigh the costs of self-hosting, once the vast majority of users can no longer deny such costs are draining their wallets and their sense of agency, we can expect this shift to become mainstream. Self-Hosting is Independence from Big Tech. When your calendar, contacts, photo library, and documents sit on your own box behind your own reverse proxy, you remove third‑party analytics, shadow data enrichment, and surprise policy drift. You also reduce the surface area for “account lockouts” that nuke access to life‑critical records. For users burned by sudden platform changes — forced accounts, feature removals, data portability barriers — self‑hosting is an antidote. Cost predictability over time. Cloud convenience is real, but variable charges accumulate as you scale storage, bandwidth, and API calls. With self‑hosting, you pay upfront (hardware + power), then amortize. For steady, continuous workloads—backups, photo libraries, media servers, home automation, docs, password vaults—the math is often favorable. Reliability through ownership. Services die. Companies pivot. APIs change. By running key utilities yourself — RSS, password vaults, photo libraries, file sync, smart‑home control — you guarantee continuity and can script migrations on your timeline. That resilience matters when consumer vendors sunset features or shove core capabilities behind accounts and subscriptions. Curiosity and capability‑building. There’s a practical joy in assembling a stack and knowing how each layer works, as I can attest.
For Linux users, self‑hosting is an ideal next step: you practice containerization, networking, monitoring, backups, and threat modeling in a low‑risk environment. The Linux‑first baseline. Linux dominates self‑hosting because it’s stable, well‑documented, and unfussy (in the context of servers; I am aware Linux desktop has some ways to go before mainstream users will flock towards Linux). Package managers and container runtimes are mature. Community tutorials cover everything from Traefik/Caddy reverse proxies to WireGuard tunnels and PostgreSQL hardening. The selfh.st survey shows Docker adoption near 90 percent, with Proxmox, Home Assistant OS, and Raspberry Pi OS widely used. It’s not gatekeeping; it’s pragmatism. Linux is simply the easiest way to stitch a small, reliable server together today. Where the rubber meets the road. Most start with a single box and a few services: identity and secrets (Vaultwarden, Authelia, Keycloak); files and backups (Nextcloud, Syncthing, Borgmatic); media (Jellyfin, Navidrome, Photoprism/Immich); home (Home Assistant); networking (Nginx/Traefik/Caddy, WireGuard); knowledge (FreshRSS, Paperless‑ngx, Ghost). The payoff is a system where each function is yours. AI is accelerating the trend. Self‑hosted AI moved from novelty to necessity for teams with sensitive workloads. Local inference avoids model‑provider data policies, reduces latency, and stabilizes costs. Smaller models now run on consumer hardware; hybrid patterns route easy requests locally and escalate only high‑uncertainty tasks to cloud. For regulated data, self‑hosting is often the only sane route. The economics are getting clearer. “Is self‑hosting cheaper?” depends on workload shape and rigor. Cloud Total Cost of Ownership (TCO) includes convenience and externalized maintenance; self‑hosting TCO includes your time, updates, and electricity. But for persistent, predictable personal workloads—photo/video storage, backups, calendars, private media—self‑hosting tends to win. What self‑hosting doesn’t fix. You still need to operate. Patching, backups, monitoring, and basic security hygiene are on you. Automated update pipelines and off‑site backups reduce pain, but they require setup and discipline. Internet constraints exist. Residential ISPs throttle uploads or block SMTP; dynamic IPs complicate inbound routes; power outages happen. In practice, most personal stacks work fine with dynamic DNS, tunneling, and a small VPS for exposed services, but know your constraints. Some services are better bought. Global‑scale delivery, high‑throughput public sites, and compliance‑heavy email sending can be more efficient with a trustworthy provider. “Self‑host everything” isn’t the point—“self‑host what’s sensible” is. The cultural angle. Self‑hosting isn’t anti‑cloud; it’s pro‑agency. It’s choosing the right locus of control for the things you care about. For FOSS communities, it’s consistent with the ethos: own your stack, contribute upstream, and refuse enshittification through slow, patient craft. For Linux users, it’s the obvious next rung: turn your knowledge into durable systems that serve people you love, not just platforms that serve themselves. If you value predictability, privacy, and the quiet confidence of owning the tools you rely on, self‑hosting stops being a hobby and starts being common sense. The shift is already underway. It’s not loud. It’s steady. And Linux is where it happens.
  21. by: Sourav Rudra Thu, 27 Nov 2025 17:00:46 GMT

A growing number of Linux desktop environments (DEs) are moving towards Wayland, the modern display protocol designed to replace the aging X11 window system. X11 has been the foundation of Linux graphical interfaces for over three decades now, but it carries significant technical debt and security limitations that Wayland aims to address. Projects like Fedora, GNOME, and KDE have been leading the charge by being among the first to adopt Wayland. Now, KDE has announced it is sunsetting the Plasma X11 session entirely.

What's Happening: The KDE Plasma team has made it clear that the upcoming Plasma 6.8 release will be Wayland-exclusive and that the Plasma X11 session will not be included in it. Support for X11 applications will be handled entirely through Xwayland, a compatibility layer that allows X11 apps to run on Wayland compositors.

The Plasma X11 session itself will continue to receive support until early 2027, though the developers have not provided a specific end date yet, as they are working on additional bug-fix releases for Plasma 6.7. The rationale behind this change is to allow the Plasma team to move faster on improving the stability and functionality of the DE. They stated that dropping X11 support will help them adapt without dragging forward legacy support that holds back development.

What to Expect: For most users, this change is said to have minimal immediate impact. KDE says that the vast majority of its users are already on the Wayland session, which has been the default on most distributions (a quick way to check which session you are running follows after this post). Users who still require X11 can opt for long-term support distributions like AlmaLinux 9, for example, which includes the Plasma X11 session and will be supported until 2032.

The developers also note that gaming performance has improved on Wayland. The session supports adaptive sync, optional tearing, and high-refresh-rate multi-monitor setups out of the box. HDR gaming works with some additional configuration.

Plus, users of NVIDIA GPUs can breathe easy now, as Wayland support in the proprietary NVIDIA driver has matured significantly. Graphics cards still supported by the manufacturer work well nowadays. For older NVIDIA hardware, the open source Nouveau driver can be used instead.

There are some issues that the Plasma team is actively working on addressing, things like output mirroring, session restore, and remembering window positions. But overall, they seem well-prepared for this massive shift.

Suggested Read 📖: U Turn! X11 is Back in GNOME 49, For Now. A temporary move that gives people some breathing room. (It's FOSS, Sourav Rudra)
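If you are unsure which session you are currently running, most login managers set the XDG_SESSION_TYPE environment variable to either "wayland" or "x11". Here is a quick, non-authoritative check; it only reads that variable and assumes your session sets it, which virtually all standard desktop logins do.

```python
import os

# XDG_SESSION_TYPE is set by most display/login managers to "wayland" or "x11".
session = os.environ.get("XDG_SESSION_TYPE", "unknown")

if session == "wayland":
    print("You are already on a Wayland session; the Plasma change should not affect you.")
elif session == "x11":
    print("You are on an X11 session; plan a move to Wayland before Plasma 6.8.")
else:
    print(f"Could not determine the session type (got: {session}).")
```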
  23. by: Sourav Rudra Thu, 27 Nov 2025 14:13:02 GMT

If you spend a lot of time on a computer, then fonts matter more than you think. A good one reduces eye strain and makes reading the contents of the screen easier. The right one can drastically improve your entire desktop experience.

In my case, I like to use Inter on my Fedora-powered daily driver, and I don't really mess around with it. But everyone's different. Some like rounded fonts. Others want sharp, clean lines. Having options matters. Your eyes, your choice, after all. Anyhow, Google just open-sourced a new option worth checking out.

Google Sans Flex: What to Expect?

Released under the SIL Open Font License, Google Sans Flex is an open source font that is touted to be Google's next-gen brand typeface, designed by David Berlow. Sans Flex is a variable font with five axes: weight, width, optical size, slant, and rounded terminals. One file holds multiple styles instead of separate files, delivering different looks from a single download.

Google designed it for screens of various sizes and modern operating systems. Plus, it should look sharp on high-resolution displays with fractional scaling. Basically, one Sans Flex file replaces dozens of individual font files. (Image: a demo of the font; I used GNOME Tweaks to apply it system-wide.)

Get Google Sans Flex

You can get the font file from the official website, and after that, you can install it on Ubuntu or any other Linux distribution with ease by following our handy guide (a minimal manual-install sketch also follows at the end of this post). Keep in mind that the variable font features won't work in Linux desktop environments, and you will only get the regular style when using it system-wide. If you need help or have any questions, you can ask the helpful folks over at our community forum.

Suggested Read 📖: How to Install New Fonts in Ubuntu and Other Linux Distros. Wondering how to install additional fonts in Ubuntu Linux? Here is a screenshot tutorial to show you how to easily install new fonts. (It's FOSS, Abhishek Prakash)
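If you would rather install the font by hand than follow the linked guide, the common approach on Linux is to copy the font file into ~/.local/share/fonts and refresh the font cache with fc-cache. Below is a minimal sketch of that, assuming fontconfig is installed (it is on virtually all Linux desktops) and that the downloaded file sits in ~/Downloads as GoogleSansFlex.ttf; the file name and path are assumptions, so adjust them to whatever you actually downloaded.

```python
import shutil
import subprocess
from pathlib import Path

# Path to the downloaded font file; the name is an assumption, adjust as needed.
downloaded = Path.home() / "Downloads" / "GoogleSansFlex.ttf"

# Per-user font directory used by fontconfig-based systems.
font_dir = Path.home() / ".local" / "share" / "fonts"
font_dir.mkdir(parents=True, exist_ok=True)

# Copy the font into place.
shutil.copy(downloaded, font_dir / downloaded.name)

# Rebuild the font cache so applications pick up the new font.
subprocess.run(["fc-cache", "-f", str(font_dir)], check=True)

print(f"Installed {downloaded.name} into {font_dir}")
```

Once the cache is refreshed, the font should show up in GNOME Tweaks and other font pickers; restart the application if it does not appear immediately.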
  24. by: Abhishek Prakash Thu, 27 Nov 2025 10:33:18 GMT

As Linux users, most of us prefer open-source software. But if you've been using Linux for a while, you know this truth too: in daily workflows, you may have to rely on proprietary software. And sometimes, you use software that feels like an open source project but actually is not.

I am going to list some of those applications that are popular among Linux users even though we often don't realize they are not open source. I'll also suggest their open source alternatives for you.

Obsidian: Personal knowledge base

Obsidian has become incredibly popular among developers, researchers, and anyone who takes their notes seriously. Its local-first approach, Markdown support, and graph view make it ideal for building a personal knowledge base. While it supports community plugins and customization, the core application itself is proprietary. This may come as a surprise because it always feels like Obsidian is open source. Alas! It is not.

🐧 The most suitable open source alternative to Obsidian is Logseq. You can also try Joplin for its simplicity.

Termius: Modern SSH client

Termius is a sleek, cross-platform SSH client used by sysadmins and developers, especially the ones who manage multiple servers. It offers synchronization across devices, organized host management, and secure key handling. However, it's a fully closed-source commercial product. How I wish it were open source.

🐧 Tabby could be somewhat of an open source alternative here.

MobaXterm: Accessing Linux servers from Windows

MobaXterm is primarily a Windows tool, but many Linux users interact with it while managing remote Linux servers from work or university environments. At least, that's how I used it around 12 years ago at work. It combines SSH, X11 forwarding, and remote desktop features under one roof. It does the job very effectively and offers a lot more than PuTTY.

🐧 Not sure if there is a single application that has the same features as MobaXterm. Perhaps PuTTY and X2Go, or Remmina, could be used.

Warp: The AI-powered terminal

Warp is a new-age terminal focused on modern developer and DevOps workflows. It offers command blocks, AI suggestions and AI agents, team sharing features, and a highly polished interface. But it's completely closed-source. I would have appreciated it if they had offered it as open source and kept their proprietary AI offering as an optional add-on.

🐧 I believe Wave is the most suitable open source alternative to Warp. It has similar features, and you can also use local AI.

Docker Desktop: For easy container management

Docker itself is open source, but Docker Desktop is not. It provides a GUI, system integration, container management tools, and additional features that simplify your container-based workflows on personal machines. After all, not everyone is a command line champion. Despite the licensing controversies, many people still use it because of convenience and integration with development environments.

🐧 Rancher Desktop is worth looking at as an alternative here.

Visual Studio Code: Microsoft's not so open offering

VS Code sits in a slightly grey area: the base project (Code - OSS) is open source, while the official Microsoft build of VS Code is proprietary due to licensed components and telemetry. Nevertheless, it remains the most popular code editor for developers, including Linux users, thanks to its extensions, easy GitHub integration, and huge plugin ecosystem.

🐧 Code - OSS is available in the official repositories of many Linux distributions.
Think of it as the Chromium browser, which is the open source version of Chrome.

Discord: The developer community hub

There was a time when developers used to dwell in IRC servers. That was 20 years ago. These days, Discord seems to have taken over all other instant messaging services. Surprisingly, Discord started as a gaming platform but has become a central communication tool for tech communities, open source projects, and developer groups. Many open source project communities now live there, even though Discord itself is fully proprietary.

🐧 Matrix-based Element can be an alternative here.

Vivaldi: Chrome alternative browser

Vivaldi is a popular web browser among Linux users. It is based on open-source Chromium, but its UI, branding, and feature layer are proprietary. Its deep customization, built-in tools (notes, mail, calendar), and privacy-focused philosophy make it a suitable choice for many Linux users. Wondering why it is not open source? They have a detailed blog post about it.

🐧 You may consider the Brave web browser.

VMware Workstation: Enterprise-level virtualization

But since it is 'enterprise' level stuff, how can it be open source? Despite all the licensing controversy, VMware's Workstation and Fusion products are still heavily used for virtualization in both personal and enterprise environments. They're well-optimized, reliable, and offer features that are sometimes ahead of open-source alternatives. But yes, they are completely proprietary.

🐧 GNOME Boxes is my preferred way of managing virtual machines.

Ukuu: Easy kernel management on Ubuntu

Ukuu stands for Ubuntu Kernel Upgrade Utility. It allows you to install mainline Linux kernels on Ubuntu. You can also use it to install a kernel of your choice and to add or delete kernels from the comfort of a GUI. A few years ago, Ukuu switched to a paid license, unfortunately.

🐧 Mainline is an actively maintained open source fork of Ukuu.

Plex: Media server for self-hosting enthusiasts

Plex is extremely popular among Linux users who build homelabs and/or media servers. What started as a self-hosted media server has gradually become a streaming platform of its own. Oh! The irony. Not just that, most of its ecosystem is closed-source and cloud-dependent. Recently, they have started cracking down on free remote streaming of personal media.

🐧 Forget Plex, go for Jellyfin. Emby and Kodi are also good open source media servers.

Tailscale: Easy remote access for self-hosters

Tailscale uses the open-source WireGuard protocol but offers a proprietary product and service on top of it. It makes secure networking between your devices ridiculously easy. This is perfect for self-hosters and homelabbers, as you can securely access your self-hosted services from outside your home network. This simplicity is why several users accept the closed-source backend.

🐧 You can go for Headscale as an alternative.

Snap Store: Open front, closed backend

Ubuntu's Snap-based software center, the Snap Store, is closed source software. Snapd, the package manager, is open source. But the Snap Store backend is proprietary and controlled by Canonical. This has sparked debate in the Linux community for years. Still, most Ubuntu users rely on it daily for installing and managing applications. It comes by default, after all.

🐧 As an Ubuntu user, you can get the actual GNOME Software back.

Steam: The backbone of Linux gaming

Surprised? Yes, our beloved Steam client is not open source software. Yet we use it. None of us can deny that Steam has been crucial for improving the state of gaming on Linux.
From Proton to native Linux support for thousands of games, Steam has played a huge role in improving Linux as a gaming platform, even though the platform itself is proprietary.

🐧 If you must, you could try Lutris or the Heroic Games Launcher.

Conclusion

Using open-source software is about freedom, not necessarily forced purity. Many Linux users aim to replace proprietary software whenever possible, but they also value productivity, reliability, and workflow efficiency. If a closed-source tool genuinely helps you work better today, well, use it, but keep supporting open alternatives alongside it. The good thing is that for almost every popular proprietary tool, the open-source ecosystem continues to offer strong alternatives.

To me, the important thing isn't whether your entire stack is open source. It's that you're aware of your choices and the trade-offs behind them. And that awareness is where true freedom begins.
  25. by: Abhishek Prakash Thu, 27 Nov 2025 04:41:37 GMT

Happy Thanksgiving 🦃

I'm incredibly thankful for this community. To our Plus members who support us financially, and to our free members who amplify our work by sharing it with the world: you all mean a lot to us. Your belief in what we do has kept us going for 13 amazing years.

This Thanksgiving, let's also extend our gratitude beyond our personal circles to the open-source contributors whose work silently powers our servers, desktops, and daily digital lives. From code to distributions to documentation, their relentless effort keeps the Linux world alive 🙏

Here are the highlights of this edition of FOSS Weekly:

Zorin OS upgrade tool.
Arduino's future looking precarious.
Dell prioritizing Linux with its recent launch.
Backing up Flatpak and Snap applications.
And other Linux news, tips, and, of course, memes!

Thanksgiving is also associated with offers, deals, and shopping. Like every year, I have curated a list of deals and offers that may interest you as a Linux user. See if there is something that you need (or want): Black Friday Deals for Linux Users 2025 [Continually Updated With New Entries]. Save big on cloud storage, privacy tools, VPN services, courses, and Linux hardware. (It's FOSS, Abhishek Prakash)

There is also a wholesome deal that will deliver fresh cranberry sauce to your doorstep while supporting keystone open source maintainers.

📰 Linux and Open Source News

Blender 5.0 has arrived with major changes across the board.
TUXEDO Computers has shelved its plans for an Arm notebook.
Ultramarine 43 is here with a fresh Fedora 43 base and some major changes.
Raspberry Pi Imager 2.0 has arrived with a clean redesign and new features.
Dell has launched the Dell Pro Max 16 Plus, with the Linux version being available before Windows.
Collabora has relaunched its desktop office suite, which is basically LibreOffice at its core but with a more modern and fresh user interface: Collabora Launches Desktop Office Suite for Linux. The new office suite uses modern tech for a consistent online-offline experience; the existing offering is renamed 'Classic' and maintains a traditional approach. (It's FOSS, Sourav Rudra)

🧠 What We're Thinking About

Arduino's enshittification might've begun as Qualcomm carries out some massive policy changes: Enshittification of Arduino Begins? Qualcomm Starts Clamping Down. New Terms of Service introduce perpetual content licenses, reverse-engineering bans, and widespread data collection. (It's FOSS, Sourav Rudra)

🧮 Linux Tips, Tutorials, and Learnings

You can back up and restore your Flatpak and Snap apps and settings between distro hops:

Backup and Restore Your Flatpak Apps & Settings. Make a backup of your Flatpak apps and application data and restore them to a new Linux system where Flatpak is supported. (It's FOSS, Roland Taylor)
Move Between the Distros: Back Up and Restore Your Snap Packages. Make a backup of your Snap apps and application data and restore them to a new Linux system where Snap is supported. Works between Ubuntu and non-Ubuntu distros, too. (It's FOSS, Roland Taylor)

The Zorin OS developers have given early access to the upgrade path from Zorin OS 17 to 18. And check out this list of OG applications that were reborn as NG apps: Open Source Never Dies: 11 of My Favorite Linux Apps That Refused to Stay Dead. These Linux apps were popular once. And then they were abandoned. And then they came back with a new generation tag. (It's FOSS, Roland Taylor)

Linux runs the world's servers, but on desktops, it's still fighting for attention.
That's why It's FOSS exists: to make Linux easier, friendlier, and more approachable for everyday users. We're funded not by VCs, but by readers like you. This Thanksgiving, we're grateful for your trust and your support.

If you believe in our work, if we ever helped you, do consider upgrading to an It's FOSS Plus membership: just $3/month or a single payment of $99 for lifetime access. Help us stay independent and stay human in the age of AI slop. Join It's FOSS Plus

👷 AI, Homelab and Hardware Corner

Don't neglect your homelab. Manage it effectively with these dashboard tools: 9 Dashboard Tools to Manage Your Homelab Effectively. See which server is running what services with the help of a dashboard tool for your homelab. (It's FOSS, Abhishek Kumar)

🛍️ Linux eBook bundle

This curated library (partner link) of courses includes Supercomputers for Linux SysAdmins, CompTIA Linux+ Certification Companion, Using and Administering Linux: Volumes 1–2, and more. Plus, your purchase supports the Room to Read initiative! Explore the Humble offer here.

✨ Project Highlights

A return from the dead? These open source apps sure did: Open Source Never Dies: 11 of My Favorite Linux Apps That Refused to Stay Dead. These Linux apps were popular once. And then they were abandoned. And then they came back with a new generation tag. (It's FOSS, Roland Taylor)

📽️ Videos I Am Creating for You

In the latest video, I share how I customize and set up my Firefox browser. Subscribe to It's FOSS YouTube Channel.

💡 Quick Handy Tip

In the Nautilus file manager, you can select files according to certain pre-set conditions. To do that, first press CTRL+S and enter the pattern you want to select by. Nautilus will then select files or directories matching the given pattern. You can press CTRL+SHIFT+I to invert the selection as well.

PS: The tip was tested using Nautilus, but other file managers should also have such functionality; only the shortcuts will vary.

🎋 Fun in the FOSSverse

Test your skills by reviewing Fedora's interesting history in this quick quiz: The Fedora Side of Linux: Quiz. Fedora has an interesting history. Take this quiz to find out a little more about it. (It's FOSS, Ankush Das)

🤣 Meme of the Week: Step aside mortals, your god is here.

🗓️ Tech Trivia: On November 24, 1998, America Online announced it would acquire Netscape Communications in a stock-for-stock deal valued at $4.2 billion, a move that signaled the shifting balance of power in the browser wars and highlighted the rapid consolidation occurring during the late-1990s Internet boom.

🧑‍🤝‍🧑 From the Community: Long-time FOSSer Ernest has posted an interesting thread on obscure Linux distributions: Obscure GNU/Linux Distributions that May Interest You. "In the ZDNET Tech Today newsletter that came into my inbox today, there's an item that interested me, and I immediately thought about all my fellow !T'S FOSS'ers! You can read the item for yourself here, but one distribution in particular caught my attention, because it offers only open source software throughout, and it eschews systemd, and instead offers several other init systems users can choose from, including OpenRC, Runit, s6, and SysV (list copied directly from the article), which brough…" (It's FOSS Community, ernie)

❤️ With love

Please share it with your Linux-using friends and encourage them to subscribe (hint: it's here). Share the articles in Linux Subreddits and community forums. Follow us on Google News and stay updated in your News feed.
Opt for It's FOSS Plus membership and support us 🙏 Enjoy FOSS 😄
