Blog Entries posted by Blogger

  1. by: Abhishek Prakash
    Sat, 29 Nov 2025 10:55:28 GMT

    The development for Ubuntu 26.04 codenamed 'Resolute Raccoon' has already begun. It is a long-term support (LTS) release and a particularly important one as we venture more into the Wayland-only era of Linux.
    Let's have a look at the release schedule of Ubuntu 26.04 and its planned features.
📋 Since the development is in progress and the final version arrives in April 2026, I'll be updating this article from time to time as there are new developments.
Ubuntu 26.04 Release Schedule
Ubuntu 26.04 LTS is going to be released on 23rd April 2026. Here's the release schedule with important milestones.
February 19: Feature Freeze
March 12: User Interface Freeze
March 19: Kernel Feature Freeze
March 26: Beta Release
April 9: Kernel Freeze
April 16: Release Candidate
April 23: Final Release
Please note that the release schedule may change as development progresses, although the final release date should stay the same.
💡 Fun fact: a new version of Ubuntu is always released on a Thursday. For October releases (version numbers ending in XX.10), it is the second Thursday of the month. For April releases (version numbers ending in XX.04), it is the fourth Thursday of the month. The two extra weeks compensate for the Christmas holidays.
New features coming to Ubuntu 26.04 Resolute Raccoon
Since these are the very early stages of development, I will include some predictions as well, which means some of the listed features may change in the final release.
    GNOME 50
    For sure, Ubuntu 26.04 LTS will be rocking the latest GNOME at the time of its release. And that latest GNOME will be version 50.
What does GNOME 50 offer? Well, that too is under development, and the picture will become a lot clearer as we enter 2026.
I will say this: be prepared to see some of your classic GNOME apps replaced by modern versions. We have seen this trend before, when GNOME changed the default text editor, document viewer, terminal, etc.
    New default video player
Totem has been the default video player in Ubuntu for as long as I can remember. Not that I can remember like an elephant, but I am not Leonard Shelby from Memento either.
In GNOME 50, Totem gives way to Showtime. Showtime feels sleek and modern and fits quite well with the new GNOME design language, libadwaita.
The interface is minimalist, but you still get some controls: click the gear symbol at the bottom right, or right-click anywhere in the player.
In the screenshot below, Showtime is referred to simply as Video Player, and its icon is similar to Totem's (referred to as Videos).
Showtime is Video Player, Totem is Videos. MPV is, well... MPV.
New default system monitor
GNOME 50 will also have a new default system monitor, Resources. This is surprising because Resources is not a GNOME Core app; it is a GNOME Circle app, which means a community-made tool that meets GNOME's standards.
The current system monitor is not that bad, in my opinion, though.
Current default system monitor
x86-64-v3 and amd64v3 versions for all packages
Ubuntu 26.04 will have amd64v3/x86-64-v3 variants for all packages, and they will be well tested, too. Some packages are already available in this format in the recently released Ubuntu 25.10; the LTS release will have all packages in this variant.
What is x86-64-v3? Well, you know what x86-64 and amd64 are, right? They are two names for the same 64-bit architecture, originally introduced by AMD and later adopted by Intel. It has been in existence for nearly two decades now.
But not all 64-bit processors are created equal. Newer generations of CPUs support more instruction sets than their predecessors, and that's why they are labeled as v2/v3/v4 architecture variants.
Basically, if you have a newer CPU, you can switch to the v3 variants of the packages and should see some performance improvements.
Don't worry, the v3 variant won't be the default. There is nothing to bother about if you are rocking an older machine.
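If you want to check whether your own CPU can run v3 packages, a quick way on most Linux systems is to ask the glibc dynamic loader or to inspect the CPU flags. This is a hedged sketch: the loader path below is the usual one on 64-bit x86 systems, and the `--help` report requires glibc 2.33 or newer.

```shell
# Ask the glibc loader (2.33+) which x86-64 levels it detects; path may vary by distro.
/lib64/ld-linux-x86-64.so.2 --help | grep 'x86-64-v' || true

# Or look for the key v3 instruction sets (AVX2, BMI2, FMA) in the CPU flags.
grep -m1 -oE 'avx2|bmi2|fma' /proc/cpuinfo | sort -u || true
```

If the first command lists a line like "x86-64-v3 (supported, searched)", the v3 packages should work on your machine.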
Introducing architecture variants: amd64v3 now available in Ubuntu 25.10
Ubuntu prides itself on being among the most compatible Linux distributions. Compatibility is often a conscious trade-off against bleeding-edge performance. In Ubuntu 25.10, we have added support for packages that target specific silicon variants, meaning you can have your cake and eat it too! Back in 2023 I wrote an article talking about the history of the amd64/x86-64 architecture and described the “levels” x86-64-v2, -v3, and -v4 (often referred to as amd64v3, amd64v4, etc.). Since then, we’…
Ubuntu Community Hub / mwhudson
Download Ubuntu 26.04 (if you want to test it)
🚧 This is a development release and not suitable for running on your main machine. Only download and install it if you want to help with testing. Use it in a virtual machine or on a spare system that has no data on it. You have been warned.
The first monthly snapshot of the Ubuntu 26.04 development release is now available for those who want to test it. And if you do test it, report bugs in a timely manner; otherwise, what's the point of testing?
Download Ubuntu 26.04 Snapshot
What do you want to see in Ubuntu 26.04 LTS?
This is a long-term support release, and expectations are high. What are yours? What features do you want to see in this upcoming version? Please share your views in the comment section.
  2. by: Roland Taylor
    Sat, 29 Nov 2025 08:31:26 GMT

    The GNOME app ecosystem is on fire these days. Whatever your needs, there's probably an app for that. Or two. Or three (no kidding)! Two of the sleekest apps for monitoring your system (aptly called, "system monitors", of course) are Mission Center, and Resources.
    Both use libadwaita to provide slick visuals, responsive GUIs, and familiar functionality for the GNOME desktop environment. But, which one is right for you? I'll attempt to help you answer that question in this article.
    Quick Intro of Both Awesome System Monitors
Now that you know what we're about, let's get acquainted with both apps. You'll see that they're quite similar in some ways, yet distinct enough to each stand alone.
    Mission Center
Mission Center 1.1.0 in GNOME 48
Mission Center is a detail-oriented system monitor app for the GNOME desktop environment, written primarily in Rust, using GTK4 and libadwaita. Geared towards high efficiency and smooth displays, Mission Center has hardware-accelerated graphs for complex CPU, memory, and GPU breakdowns.
    Resources
Resources 1.9.1 in GNOME 48
Resources is a relatively minimalist system monitor for the GNOME desktop environment. As a GNOME Circle app, it conforms strictly to the GNOME HIG and its patterns, with an emphasis on simplicity and reduced user effort. Resources is written in Rust and uses GTK4 and libadwaita for its GUI.
    Usage: The First Glance
    First impressions matter, and with any system monitor, what you see first tells you what's going on before you even click on anything else.
    So how do these two stack up? Let's see.
    Mission Center: Hardware First, Stats & Figures Upfront
Mission Center drops you right into the hardware action
On first launch, Mission Center surfaces your hardware resources right away: CPU, GPUs, memory, drives, and network, with detailed readouts right before your eyes. Combining clean, accessible visuals with thorough device info, Mission Center makes you feel like you've hooked your computer up to an advanced scanner, where nothing is hidden from view.

    If you like to jump right into the stats and details, Mission Center is just for you.
    Resources: Apps & Hardware Side-by-side
Resources puts your apps and hardware resources side by side
Resources displays a combined overview of your apps and hardware resources at first glance. You can get a quick view of which apps are using the most resources, side by side with what hardware resources are most in use. You also get a graph for the system's battery (if present) in the sidebar (not shown here).
    It doesn't give you detailed hardware stats and readouts until you "ask" (by clicking on any individual component), but you can still see which resources are under strain at a glance and compare this with which apps are using the most resources.
    CPU Performance & Memory Usage
    A system monitor is no good if it hogs system resources for itself. They need to be lean and quick to help us wrangle with other applications that aren't. So where do our two contenders fall?
💡 Note: Plasma System Monitor was used for resource measurements. Different apps, including both Mission Center and Resources, measure resource usage differently.
Mission Center: Stealthy on the CPU, kind to memory
Mission Center uses around 160 MiB (168 MB) during casual usage
Mission Center barely sips the CPU: its usage is negligible enough that it does not show up in your active processes (if you choose this filter) in GNOME System Monitor, even while displaying live details for a selected application.
This is likely because Mission Center uses GPU acceleration for its graphs, thereby reducing strain on the CPU. It's also relatively light on memory usage, hitting roughly 168 MB even while showing detailed process info.
    Resources: Light on CPU, easier on memory use
Resources hits roughly 130 MiB (136 MB) in typical usage
Keeping well within its balanced, lightweight approach, Resources sips the CPU while also keeping memory usage low, at around 136 MB. While its use of hardware acceleration could not be confirmed, it's worth noting that Resources keeps graphs visible and active even when displaying process details. Still, it manages to keep resource usage to a minimum.
    Differences: Negligible
    As this is one of the few areas where the comparison veers beyond subjectivity, it's important to note that the difference here is not that significant. Both apps are light on resources, especially in the critical area of CPU usage.
    The difference in memory usage between the two isn't particularly significant, though for users with limited RAM to spare, Mission Center's slightly higher memory usage could be a consideration to keep in mind.
    Process Management & Control
Mission Center (left, background) and Resources (right, foreground) showing their app views
Perhaps the most critical aspect of any system monitor is not just how well it can show you information, but how much it actually lets you do with that information. That's where process management and control come in, so let's look at how these two compare.
    What both have in common
    As you might expect, each app gives you the typical "Halt/Stop", "Continue", "End", and "Kill" signal controls as standard fare for Linux process management. Both allow you to view details for an individual app or process.
    Of course, you also get the common, critical stats, like CPU, Memory, and GPU usage. However, there are distinct, notable differences that can help you decide which one you'd prefer.
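Those four GUI controls map directly onto standard POSIX signals, so the same operations can be sketched from a terminal. The example below uses a throwaway sleep process as the target, purely for illustration:

```shell
# Start a throwaway process to practice on.
sleep 60 &
pid=$!

kill -STOP "$pid"    # "Halt/Stop": freeze the process
kill -CONT "$pid"    # "Continue":  resume it
kill -TERM "$pid"    # "End":       ask it to exit cleanly
# kill -KILL "$pid"  # "Kill":      force it out (last resort)

status=0
wait "$pid" || status=$?
echo "exit status: $status"   # typically 143 = 128 + 15 (SIGTERM)
```

This is exactly what both apps do behind the scenes when you pick a signal from their menus.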
💡 Note: processes in Linux are not the same as "apps". Apps can consist of multiple processes working in tandem.
Mission Center: More details up front
Viewing the details for Google Chrome in Mission Center
Both apps and processes are displayed in the same tree view in Mission Center, just separated by a divider. It puts more info before you by default, including Shared Memory, Drive I/O, and (for processes only) the Process ID (PID). You can also combine parent and child process data, and show which CPU core an app is running on.
Despite its detailed view, there's no control over process priority in Mission Center
While you get more signals for controlling your processes, like 'Interrupt' (INT), 'Hangup' (HUP), and 'Terminate' (TERM), you don't get the option to display or adjust the 'niceness' of a process, which, for those not in the know, tells the system what priority a process should have.
    Standout feature: Service management
Mission Center lets you start, stop, and restart services with systemd from a familiar GUI
One thing that sets Mission Center apart from other system monitors is its ability to display and control services through systemd. With systemd being pretty much the standard across most distros, this is a feature many power users will want in their toolkit, especially those who would prefer to avoid the CLI for tasks such as restarting services like PipeWire.
    Resources: Crouching data, hidden customization
Resources showing app details for Nextcloud Desktop
Interestingly, while Resources might appear to be the more conservative choice, it actually gives you more options for what data you can display. For example, Resources lets you view GPU video encoder/decoder usage on a per-app basis. Another handy feature is the option to change a process's niceness value, though you must first enable this in the preferences.
    In Resources, apps and processes are displayed in separate views, which have some notable differences. For instance, there is no "User" column in the 'Apps' view, and you cannot change the priority of an app.
    Standout feature: Changing processor affinity
Changing processor affinity in Resources is quick and simple
Resources features a hidden gem in its process view: the ability to change CPU affinity on a per-process basis. This is especially handy for power users who want to make the most of modern multi-core systems, where efficiency and performance cores often dwell in the same CPU.
    With a clever combination of niceness values (priority) and CPU affinity, advanced users can use Resources to pull maximum performance or power savings without having to jump into the terminal.
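For comparison, the same two knobs are exposed on the command line by the util-linux tools taskset and renice. A quick sketch, run against the current shell ($$) purely for illustration:

```shell
# Show the current CPU affinity of this shell.
taskset -cp $$

# Pin it to CPU 0 only (use a range like 0-3 on multi-core machines).
taskset -cp 0 $$

# Set its niceness to 5 (lower priority); lowering niceness back needs root.
renice -n 5 -p $$
```

Resources simply wraps the same underlying system calls in a GUI, which is why its changes behave identically to these commands.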
    Installation & Availability
    Mission Center: A package for everyone
    Mission Center is included by default with Aurora, Bazzite, Bluefin and DeLinuxCo. It's also available through an official Flatpak hosted on Flathub. The project provides AppImage downloads for both AMD64 and ARM64 architectures, and a Snap package in the Snap Store.
    Ubuntu users can install Mission Center with Snap by running:
# Install Mission Center:
sudo snap install mission-center
If even these are not enough, you can also get Mission Center in many distributions directly from their repositories (though mileage may vary on the version that's actually available in such instances).
    The project provides a full list of repositories (with version numbers) in their Readme file.
    Resources: A conservative, but universal approach
Being part of GNOME Circle, Resources is assuredly packaged as a Flatpak and available via Flathub. This is the official package, and it provides the most stable experience with the newest available features.
    Unofficial packages are also available for Arch and Fedora.
    Arch users can install it with:
pacman -S resources
Fedora users can install it using dnf and Copr:
dnf copr enable atim/resources
dnf install resources
Final thoughts: Which one's for you?
    That's a question only you can answer, but hopefully you now have enough information to help you make an informed decision. With the diversity of apps arising in this season of mass Linux development and adoption, it's only a matter of time before you find (or create) your favourite.
    If you're looking for deep hardware monitoring up front and don't need heavy customization, Mission Center is more likely to be a good fit for you. However, if you're looking for a quick bird's eye-view of apps and hardware at a glance, with the option to dig deeper where needed, Resources is probably more your speed.
Of course, you can install and try both apps if you'd like; that's part of the fun and freedom of Linux. Feel free to let us know what you think in the comments.
  3. by: Abhishek Prakash
    Fri, 28 Nov 2025 18:54:18 +0530

    Happy Thanksgiving.
To celebrate the occasion, I am announcing a new course that teaches you Infrastructure as Code with Terraform. This course is contributed by Akhilesh, who is also the creator of the Living DevOps platform.
    The Terraform course is free for all LHB members.
Learn Infrastructure as Code with Terraform
Learn Terraform from scratch with a Linux-first approach. Master Infrastructure as Code concepts, modules, state, best practices, and real-world workflows.
Linux Handbook / Akhilesh Mishra
That's not it. We also have a Kubernetes course for beginners now. It is a blend of essential concept explanations with handy examples. This one is for Pro members only.
Mastering Kubernetes as a Beginner
Stop struggling with Kubernetes. Learn it properly with a structured and practical course crafted specially for beginners.
Linux Handbook / Mead Naji
With these two, our catalog now has 16 courses. And we are not stopping here. We are working on more courses, series, and videos. Stay tuned and enjoy the membership benefits 💙
By the way, we are also running a limited-time Black Friday deal. You get $10 off on both the yearly and lifetime memberships (for $89 instead of $139) for the next 7 days. This is probably the last time you'll see such low prices, as we plan a price increase in 2026 to keep up with inflation.
    Get Lifetime Pro Membership  
     
      This post is for subscribers only
    Subscribe now Already have an account? Sign in
  4. by: Mead Naji
    Fri, 28 Nov 2025 16:44:47 +0530

    Learn Kubernetes the way it should be learned, with real understanding, not just commands.
    This course takes you from absolute basics to the internal working of a Kubernetes cluster, step by step, with hands-on demos and practical explanations.
    No copy-paste YAML tutorials. You get real Kubernetes knowledge that sticks.
    🧑‍🎓 Who is this course for?
    This course is ideal for any beginner:
Developers moving into DevOps or Cloud
Sysadmins transitioning to containerized infrastructure
Engineering students and freshers learning Kubernetes
Anyone who wants a strong Kubernetes foundation as a beginner
✅ No prior Kubernetes experience is required, but basic Linux command-line knowledge will help.
🧩 What you'll learn in this course
    The course is divided into three modules:
    Module 1: Kubernetes Basics & First Workloads
    In this module, you’ll build your foundation in Kubernetes and start running workloads on your own system. It covers:
Introduction to Kubernetes
What is Kubernetes & Why We Need It
Setting Up Kubernetes on Your Local Machine
Working with Pods
At the end of the module, you'll be able to run Kubernetes locally, understand its purpose, and deploy your first pods with confidence.
    Module 2: Core Kubernetes Concepts
    This is where Kubernetes stops being “magic” and starts making sense as you learn how Kubernetes actually manages applications. It covers:
Deep dive into Pod creation & interaction
Labels and selectors
Deployments and workload management
Namespaces and configuration basics
Multi-container pod patterns
At the end of the module, you'll understand how Kubernetes organizes, scales, and manages real-world applications inside a cluster.
    Module 3: Kubernetes Infrastructure & Internals
    Most courses stop at commands. This one goes deeper. You learn about networking, storage, and what happens behind the scenes. This module covers:
Networking in Kubernetes
Storage & persistent volumes
Recap with practical demos
From kubectl command to cluster execution
💡 You can run the clusters locally with Minikube if you want, which makes the course ideal for students who don't want to spend on a cloud-based cluster. This method has been covered in the course.
By the end, you won't just use Kubernetes; you'll understand how your commands flow through the system and become running containers.
  5. by: Akhilesh Mishra
    Fri, 28 Nov 2025 15:42:59 +0530

    Think of Terraform as a construction manager. Resources are the buildings you construct. Data sources are the surveys you conduct before building. Dependencies are the order in which construction must happen. You can’t build the roof before the walls, right?
    Resources: The Heart of Everything
    If Terraform were a programming language, resources would be the objects. They’re things you create, modify, and delete. Every piece of infrastructure — servers, databases, networks, load balancers—starts as a resource in your code.
    The anatomy of a resource: Two parts matter most. The type tells Terraform what kind of thing to create. The name is how you refer to it in your code. That’s it.
resource "aws_instance" "web" {
  ami           = "ami-12345678"
  instance_type = "t2.micro"
}
Here's what beginners often miss: the name web isn't the name your server gets in AWS. It's just a label for your Terraform code. Think of it like a variable name in programming. The actual AWS resource might be named something completely different (usually via tags).
    Arguments vs Attributes - the key distinction: You provide arguments (the input values). Terraform gives you attributes (the output values). You tell Terraform instance_type = "t2.micro". Terraform tells you back id = "i-1234567890abcdef0" and public_ip = "54.123.45.67" after creation.
    This distinction is crucial because attributes only exist after Terraform creates the resource. You can’t reference an instance’s IP address before it exists. Terraform figures out the order automatically.
    References connect everything: When you write aws_instance.web.id, you’re doing three things:
Referencing the resource type (aws_instance)
Referencing your local name for it (web)
Accessing an attribute it exposes (id)
This is how infrastructure connects. One resource references another's attributes. The VPC ID goes into the subnet configuration. The subnet ID goes into the instance configuration. These references tell Terraform the construction order.
    Why the two-part naming? Because you might create multiple instances of the same type. You could have aws_instance.web, aws_instance.db, and aws_instance.cache. The type describes what it is. The name describes which one.
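To make the two-part naming concrete, here's a hypothetical snippet (the AMI ID is a placeholder) with two resources of the same type, distinguished only by their local names:

```terraform
# Two resources of the same TYPE ("aws_instance"), told apart by NAME.
resource "aws_instance" "web" {
  ami           = "ami-12345678" # placeholder AMI ID
  instance_type = "t2.micro"
}

resource "aws_instance" "db" {
  ami           = "ami-12345678"
  instance_type = "t3.medium"
}

# Elsewhere in the code, aws_instance.web.id and aws_instance.db.id
# refer unambiguously to each one.
```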
    Data Sources: Reading the Existing World
    Resources create. Data sources read. That’s the fundamental difference.
    Real infrastructure doesn’t exist in a vacuum. You’re deploying into an existing VPC someone else created. You need the latest Ubuntu AMI that changes monthly. You’re reading a secret from a vault. None of these things should you create — you just need to reference them.
    Data sources are queries: Think of them as SELECT statements in SQL. You’re querying existing infrastructure and pulling information into your Terraform code.
data "aws_ami" "ubuntu" {
  most_recent = true
  owners      = ["099720109477"]

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-focal-20.04-*"]
  }
}
This doesn't create an AMI. It searches for one that already exists and gives you its ID.
    Why data sources matter for infrastructure code: Imagine hardcoding AMI IDs. Next month, there’s a new Ubuntu release with security patches. You have to find the new AMI ID and update your code. Or, use a data source that always finds the latest. Code stays the same, infrastructure stays updated.
    The same principle applies to everything external: VPCs, DNS zones, availability zones, TLS certificates, secrets. If it exists before your Terraform code runs, use a data source.
    The reference difference: Resources are type.name.attribute. Data sources are data.type.name.attribute. That extra data. prefix tells Terraform and you that this is a read operation, not a create operation.
    Data sources run first: Before Terraform creates anything, it runs all data source queries. This makes sense—you need to read information before you can use it to create things.
    String Interpolation: Building Dynamic Infrastructure
    Infrastructure can’t be static. You need bucket names that include environment names. Server names that include region. Tags that reference other resources. String interpolation is how you build these dynamic values.
    The rule is simple: Use ${} when building strings. Don’t use it for direct references.
bucket = "myapp-${var.environment}-data"  # String building - USE ${}
ami    = data.aws_ami.ubuntu.id           # Direct reference - NO ${}
Why the distinction? In Terraform's early days (before version 0.12), you needed "${var.name}" everywhere. It was verbose and ugly. Modern Terraform is cleaner — interpolation only when actually building strings.
    What you can put inside interpolation: Everything. Variables, resource attributes, conditional expressions, function calls. If it produces a value, you can interpolate it.
name = "${var.project}-${var.environment}-${count.index + 1}"
Common beginner mistake: writing instance_type = "${var.instance_type}". The ${} is unnecessary here — you're not building a string, just referencing a variable. Just write instance_type = var.instance_type.
    When interpolation shines: Multi-part names. Constructing URLs. Building complex strings from multiple sources. Any time “I need to combine these values into text.”
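A small illustrative sketch of interpolation doing that kind of work (the variable names and defaults are hypothetical):

```terraform
# Hypothetical variables, shown only to illustrate string building.
variable "project"     { default = "myapp" }
variable "environment" { default = "dev" }
variable "domain"      { default = "example.com" }

locals {
  bucket_name = "${var.project}-${var.environment}-data" # multi-part name
  api_url     = "https://api.${var.domain}/v1"           # URL construction
  plain_ref   = var.environment                          # no ${} needed here
}
```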
    Dependencies: The Hidden Graph
    This is where Terraform’s magic happens. You write resources in any order. Terraform figures out the correct creation order automatically. How? By analyzing dependencies.
    Implicit Dependencies: The Automatic Kind
When you reference one resource's attribute in another resource, you've created a dependency. Terraform sees the reference and knows the order.
    Mental model: Think of dependencies as arrows in a diagram. VPC -> Subnet -> Instance. Each arrow means “must exist before.” Terraform builds this diagram automatically by finding all the attribute references in your code.
resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_subnet" "app" {
  vpc_id     = aws_vpc.main.id # Reference creates dependency
  cidr_block = "10.0.1.0/24"
}

resource "aws_instance" "web" {
  subnet_id     = aws_subnet.app.id # Another dependency
  ami           = "ami-12345678"
  instance_type = "t2.micro"
}
You can write these in any order in your files. Terraform sees aws_vpc.main.id referenced in the subnet, and aws_subnet.app.id referenced in the instance. It builds the dependency graph: VPC -> Subnet -> Instance.
    Why this matters: Terraform creates things in parallel when possible. If you define 10 S3 buckets with no dependencies, Terraform creates all 10 simultaneously. If you define a VPC with 10 subnets, it creates the VPC first, then all 10 subnets in parallel.
    The key insight: Every attribute reference is a dependency. resource.name.attribute means “I need this resource to exist first.”
    Explicit Dependencies: The Manual Kind
    Sometimes Terraform can’t detect dependencies automatically. The relationship exists, but there’s no attribute reference to signal it.
    Classic example - IAM: You create an IAM role. You attach a policy to it. You launch an instance with that role. The instance references the role, but not the policy. Terraform might launch the instance before the policy attaches, causing errors.
resource "aws_instance" "app" {
  ami           = "ami-12345678"
  instance_type = "t2.micro"

  depends_on = [aws_iam_role_policy.app_policy]
}
The depends_on argument says "don't create this until that other thing exists," even though we're not referencing any of its attributes.
    When you need explicit dependencies:
    Timing matters but there’s no direct attribute reference
    Resources must exist in a certain order for external reasons
    You’re working around provider bugs or limitations
    Use sparingly: Explicit dependencies reduce parallelism. Terraform must wait for the dependency before proceeding. Only use them when implicit dependencies won’t work.
    The Dependency Graph
    Behind the scenes, Terraform builds a directed acyclic graph (DAG) of all your resources. Nodes are resources. Edges are dependencies. This graph determines everything:
What to create first
What can be created in parallel
What to destroy first when tearing down
Directed: Dependencies have direction. A depends on B, not the other way around.
    Acyclic: No loops allowed. If A depends on B, B can’t depend on A (even indirectly). Terraform will error on circular dependencies—they’re impossible to resolve.
    Why you should care: Understanding the dependency graph helps you debug. If Terraform is creating things in a weird order, check the references. If it’s failing on circular dependencies, look for cycles in your attribute references.
    Viewing the graph: Run terraform graph to see the actual graph Terraform built. It’s mostly useful for debugging complex configurations.
    How It All Fits Together
Every Terraform configuration is a combination of these concepts:
Resources define what to create
Data sources query what exists
Interpolation builds dynamic values
Dependencies determine the order
The workflow: Data sources run first (they're just queries). Terraform analyzes all resource definitions and builds the dependency graph. It creates resources in the correct order, parallelizing when possible. References between resources become the glue.
    The mental shift: You’re not writing a script that executes top-to-bottom. You’re describing desired state. Terraform figures out how to achieve it. That’s declarative infrastructure.
    Why beginners struggle: They think procedurally. “First create this, then create that.” Terraform doesn’t work that way. You declare everything you want. Terraform analyzes the dependencies and figures out the procedure.
    Common Mistakes and How to Avoid Them
    Mistake 1: Using resource names as identifiers - Resource names in Terraform are local to your code. They’re not the names resources get in your cloud provider. Use tags or name attributes for that.
    Mistake 2: Trying to reference attributes before resources exist - You can’t use aws_instance.web.public_ip in a variable default value. The instance doesn’t exist when Terraform evaluates variables. Use locals or outputs instead.
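A sketch of the fix for mistake 2, using a local value or an output (the resource and names here are hypothetical); both are evaluated only after the resource exists:

```terraform
# WRONG: a variable default cannot reference a resource attribute.
# variable "web_ip" { default = aws_instance.web.public_ip }  # invalid

# RIGHT: derive the value with a local...
locals {
  web_ip = aws_instance.web.public_ip
}

# ...or expose it as an output, if other configurations need it.
output "web_public_ip" {
  value = aws_instance.web.public_ip
}
```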
    Mistake 3: Over-using explicit dependencies - If you’re writing lots of depends_on, you’re probably doing something wrong. Most dependencies should be implicit through attribute references.
    Mistake 4: Confusing data sources with resources - Data sources don’t create anything. If you need to create something, use a resource, not a data source.
    Mistake 5: Hardcoding values that data sources should provide - Don’t hardcode AMI IDs, availability zones, or other values that change. Use data sources to query them dynamically.
    Quick Reference
    Resources:
resource "type" "name" {
  argument = "value"
}
# Reference: type.name.attribute
Data Sources:
data "type" "name" {
  filter = "value"
}
# Reference: data.type.name.attribute
String Interpolation:
"prefix-${var.name}-suffix"  # Building strings
var.name                     # Direct reference
Dependencies:
# Implicit (automatic)
subnet_id = aws_subnet.main.id

# Explicit (manual)
depends_on = [aws_iam_role.app]
Master these four concepts and you'll understand 80% of Terraform. Everything else builds on this foundation.
You now understand the core building blocks: resources, data sources, and dependencies. But what if you need to create multiple similar resources? Copy-pasting code isn't the answer. In the next chapter, we'll explore count, for_each, and conditionals—the tools that make your infrastructure code truly dynamic and scalable.
  6. by: Akhilesh Mishra
    Fri, 28 Nov 2025 15:39:09 +0530

    How does Terraform remember what it created? How does it connect to AWS or Azure? Two concepts answer these questions: State (Terraform’s memory) and Providers (Terraform’s translators).
    Without state and providers, Terraform would be useless. Let’s understand them.
    What is Terraform State?
    State is Terraform’s memory. After terraform apply, it stores what it created in terraform.tfstate.
    Run this example:
    resource "local_file" "example" {
      content  = "Hello from Terraform!"
      filename = "example.txt"
    }

    After terraform apply, check your folder – you’ll see example.txt and terraform.tfstate.
    State answers three questions:

    1. What exists? – Resources Terraform created
    2. What changed? – Differences from your current config
    3. What to do? – Create, update, or delete?

    Change the content and run terraform plan. Terraform compares the state with your new config and shows exactly what will change. That’s the power of state.
    Local vs Remote State
    Local state works for solo projects. But teams need remote state stored in shared locations (S3, Azure Storage, Terraform Cloud).
    Remote state with S3:
    terraform {
      backend "s3" {
        bucket         = "my-terraform-state"
        key            = "terraform.tfstate"
        region         = "us-west-2"
        dynamodb_table = "terraform-locks"  # Enables locking
      }
    }

    State locking prevents disasters when multiple people run Terraform simultaneously. Person A locks the state, Person B waits. Simple, but crucial for teams.
    Backend Configuration
    Backends tell Terraform where to store state. Local backend uses files on your computer. Remote backends use cloud storage.
    Local backend (default):
    # No configuration needed - stores terraform.tfstate locally

    S3 backend (AWS):

    terraform {
      backend "s3" {
        bucket         = "my-terraform-state"
        key            = "prod/terraform.tfstate"
        region         = "us-west-2"
        encrypt        = true
        dynamodb_table = "terraform-locks"
      }
    }

    Azure backend:

    terraform {
      backend "azurerm" {
        resource_group_name  = "terraform-state"
        storage_account_name = "tfstatestore"
        container_name       = "tfstate"
        key                  = "prod.terraform.tfstate"
      }
    }

    GCS backend (Google Cloud):

    terraform {
      backend "gcs" {
        bucket = "my-terraform-state"
        prefix = "prod"
      }
    }

    Terraform Cloud:

    terraform {
      backend "remote" {
        organization = "my-org"
        workspaces {
          name = "production"
        }
      }
    }

    Backend Initialization
    After adding backend config, initialize:
    terraform init

    Terraform configures the backend and downloads any required providers. If state already exists locally, Terraform asks to migrate it to the remote backend.
    Migration example:
    Initializing the backend...

    Do you want to copy existing state to the new backend?
      Pre-existing state was found while migrating the previous "local" backend
      to the newly configured "s3" backend. No existing state was found in the
      newly configured "s3" backend. Do you want to copy this state to the new
      "s3" backend? Enter "yes" to copy and "no" to start with an empty state.

      Enter a value: yes

    Type yes and Terraform migrates your state.
    Partial Backend Configuration
    Don’t hardcode sensitive values. Use partial configuration:
    backend.tf:
    terraform {
      backend "s3" {
        # Dynamic values provided at init time
      }
    }

    backend-config.hcl:

    bucket         = "my-terraform-state"
    key            = "prod/terraform.tfstate"
    region         = "us-west-2"
    dynamodb_table = "terraform-locks"

    Initialize with config:

    terraform init -backend-config=backend-config.hcl

    Or via CLI:

    terraform init \
      -backend-config="bucket=my-terraform-state" \
      -backend-config="key=prod/terraform.tfstate" \
      -backend-config="region=us-west-2"

    Use case: Different backends per environment without changing code.
    Changing Backends
    Switching backends? Change config and re-run init:
    terraform init -migrate-state

    Terraform detects the backend change and migrates state automatically.

    Reconfigure without migration:

    terraform init -reconfigure

    This starts fresh and doesn’t migrate existing state.
    Backend Best Practices
    For S3:
    - Enable bucket versioning (rollback bad changes)
    - Enable encryption at rest
    - Use DynamoDB for state locking
    - Restrict bucket access with IAM

    For teams:
    - Always use remote backends
    - Never use local backends in production
    - One state file per environment
    - Use separate AWS accounts for different environments
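    One common way to get “one state file per environment” is to give each environment directory its own backend key; a minimal sketch (bucket and key names are illustrative):

```hcl
# environments/prod/backend.tf
terraform {
  backend "s3" {
    bucket = "my-terraform-state"
    key    = "prod/terraform.tfstate"   # dev would use "dev/terraform.tfstate"
    region = "us-west-2"
  }
}
```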
    Example S3 setup:
    # Create S3 bucket
    aws s3api create-bucket \
      --bucket my-terraform-state \
      --region us-west-2 \
      --create-bucket-configuration LocationConstraint=us-west-2

    # Enable versioning
    aws s3api put-bucket-versioning \
      --bucket my-terraform-state \
      --versioning-configuration Status=Enabled

    # Create DynamoDB table for locking
    aws dynamodb create-table \
      --table-name terraform-locks \
      --attribute-definitions AttributeName=LockID,AttributeType=S \
      --key-schema AttributeName=LockID,KeyType=HASH \
      --billing-mode PAY_PER_REQUEST

    What Are Providers?
    Providers are translators. They connect Terraform to services like AWS, Azure, Google Cloud, and 1,000+ others.
    Basic AWS provider:
    provider "aws" {
      region = "us-west-2"
    }

    resource "aws_s3_bucket" "my_bucket" {
      bucket = "my-unique-bucket-12345"  # Must be globally unique
    }

    Authentication: Use the AWS CLI (aws configure) or environment variables. Never hardcode credentials in your code.
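    The environment-variable route looks like this (the values are placeholders; the variable names are the standard ones the AWS SDKs and Terraform’s AWS provider read):

```shell
# Placeholder credentials - never commit real ones anywhere
export AWS_ACCESS_KEY_ID="AKIAEXAMPLEKEY"
export AWS_SECRET_ACCESS_KEY="example-secret-key"
export AWS_DEFAULT_REGION="us-west-2"
```

    Terraform picks these up automatically, so the provider block can stay credential-free.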
    Provider Requirements and Versions
    Always specify provider versions to prevent surprises:
    terraform {
      required_providers {
        aws = {
          source  = "hashicorp/aws"
          version = "~> 5.0"  # 5.x but not 6.0
        }
        random = {
          source  = "hashicorp/random"
          version = "~> 3.0"
        }
      }
    }

    provider "aws" {
      region = "us-west-2"
    }

    resource "random_string" "suffix" {
      length  = 6
      special = false
      upper   = false
    }

    resource "aws_s3_bucket" "example" {
      bucket = "my-bucket-${random_string.suffix.result}"
    }

    Version operators: = (exact), >= (minimum), ~> (pessimistic constraint).
    Provider Aliases: Multiple Regions
    Need the same provider with different configurations? Use aliases:
    provider "aws" {
      region = "us-west-2"
    }

    provider "aws" {
      alias  = "east"
      region = "us-east-1"
    }

    resource "aws_s3_bucket" "west" {
      bucket = "west-bucket-12345"
    }

    resource "aws_s3_bucket" "east" {
      provider = aws.east
      bucket   = "east-bucket-12345"
    }

    This creates buckets in two different regions. Perfect for multi-region deployments or backups.
    State Best Practices
    Must do:
    - Add .tfstate to .gitignore (state files contain secrets)
    - Use remote state with encryption for teams
    - Enable state locking to prevent conflicts
    - Enable versioning on state storage (S3, etc.)

    Never do:
    - Manually edit state files
    - Commit state to git
    - Ignore state locking errors
    - Delete state without backups
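    A quick way to cover the first “must do” (the patterns are appended blindly here; check for duplicates if you already have a .gitignore):

```shell
# Keep state files, local provider caches, and variable files out of git
cat >> .gitignore <<'EOF'
*.tfstate
*.tfstate.*
.terraform/
*.tfvars
EOF
```

    *.tfvars is included because variable files often hold secrets too.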
    Essential State Commands
    View state:
    terraform state list                        # List all resources
    terraform state show aws_s3_bucket.example  # Show resource details

    Modify state:

    terraform state mv <old> <new>    # Rename resource
    terraform state rm <resource>     # Remove from state
    terraform import <resource> <id>  # Import existing resource

    Example - Renaming a resource:

    # Change resource name in code, then:
    terraform state mv aws_s3_bucket.old aws_s3_bucket.new
    terraform plan  # Should show "No changes"

    Advanced State Management
    Beyond basic commands, here’s what you need for real-world scenarios:
    Pulling and Pushing State
    Pull state to local file:
    terraform state pull > backup.tfstate

    Creates a backup. Useful before risky operations.
    Push state from local file:
    terraform state push backup.tfstate

    Restores state from a backup. Use with extreme caution.
    Moving Resources Between Modules
    Refactoring code? Move resources without recreating them:
    # Moving to a module
    terraform state mv aws_instance.web module.servers.aws_instance.web

    # Moving from a module
    terraform state mv module.servers.aws_instance.web aws_instance.web

    Removing Resources Without Destroying
    Remove from state but keep the actual resource:
    terraform state rm aws_s3_bucket.keep_this

    Use case: You created a resource with Terraform but now want to manage it manually. Remove it from state, and Terraform forgets about it.
    Importing Existing Resources
    Someone created resources manually? Import them into Terraform:
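    Note that terraform import only attaches state to a resource block that already exists in your code, so a minimal stub is written first (the names below are illustrative):

```hcl
# Minimal stub - attributes are filled in after comparing with terraform plan
resource "aws_s3_bucket" "imported" {
  bucket = "my-existing-bucket"
}
```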
    # Import an existing S3 bucket
    terraform import aws_s3_bucket.imported my-existing-bucket

    # Import an EC2 instance
    terraform import aws_instance.imported i-1234567890abcdef0

    Steps:
    1. Write the resource block in your code (without attributes)
    2. Run the import command with the resource address and actual ID
    3. Run terraform plan to see what attributes are missing
    4. Update your code to match the actual resource
    5. Run terraform plan again until it shows no changes

    State Locking Details
    When someone is running Terraform, the state is locked. If a lock gets stuck:
    # Force unlock (dangerous!)
    terraform force-unlock <lock-id>

    Only use this if you’re absolutely sure no one else is running Terraform.
    Replacing Providers
    Migrating from one provider registry to another:
    terraform state replace-provider \
      registry.terraform.io/hashicorp/aws \
      registry.example.com/hashicorp/aws

    Useful when moving to private registries.
    State Inspection Tricks
    Show specific resource:
    terraform state show aws_instance.web

    Shows all attributes of a single resource.
    Filter state list:
    terraform state list | grep "aws_instance"

    Finds all EC2 instances in your state.
    Count resources:
    terraform state list | wc -l

    How many resources does Terraform manage?
    When Things Go Wrong
    State out of sync with reality?
    terraform refresh

    # Or the newer approach:
    terraform apply -refresh-only

    Corrupted state?
    1. Check your state backups (S3 versioning saves you here)
    2. Restore from backup using terraform state push
    3. Always test in a non-prod environment first

    Conflicting states in a team?

    - Enable state locking (DynamoDB with S3)
    - Use remote state, never local, for teams
    - Implement CI/CD that runs Terraform centrally
    Quick Reference
    Backends:
    # S3
    terraform {
      backend "s3" {
        bucket         = "my-state-bucket"
        key            = "terraform.tfstate"
        region         = "us-west-2"
        dynamodb_table = "terraform-locks"
      }
    }

    # Azure
    terraform {
      backend "azurerm" {
        resource_group_name  = "terraform-state"
        storage_account_name = "tfstatestore"
        container_name       = "tfstate"
        key                  = "terraform.tfstate"
      }
    }

    terraform init                           # Initialize backend
    terraform init -backend-config=file.hcl  # Partial config
    terraform init -migrate-state            # Migrate to new backend

    Providers:

    # Single provider
    provider "aws" {
      region = "us-west-2"
    }

    # With version constraint
    terraform {
      required_providers {
        aws = {
          source  = "hashicorp/aws"
          version = "~> 5.0"
        }
      }
    }

    # Multiple regions with aliases
    provider "aws" {
      alias  = "east"
      region = "us-east-1"
    }

    resource "aws_s3_bucket" "east_bucket" {
      provider = aws.east
      bucket   = "my-bucket"
    }

    Common Commands:

    terraform state list            # List resources
    terraform state mv <old> <new>  # Rename resource
    terraform state rm <resource>   # Remove from state
    terraform import <res> <id>     # Import existing resource

    You now understand how Terraform remembers (state) and connects (providers). These two concepts are fundamental to everything else you’ll do with Terraform.
    State and providers handle the “how” and “where” of Terraform. Now let’s explore the “what”—the actual infrastructure you create. In the next chapter, we’ll dive deep into resources, data sources, and the dependency system that makes Terraform intelligent about the order of operations.
