
Whereis Command Examples

by: Abhishek Prakash
Wed, 05 Mar 2025 20:45:06 +0530


The whereis command helps users locate the binary, source, and manual page files for a given command. In this tutorial, I will walk you through practical examples to help you understand how to use the whereis command.

Unlike other search commands like find that scan the entire file system, whereis searches predefined directories, making it faster and more efficient.

It is particularly useful for system administrators and developers to locate files related to commands without requiring root privileges.

whereis Command Syntax

To use any command to its full potential, it is important to know its syntax, so let's start with the syntax of the whereis command:

whereis [OPTIONS] FILE_NAME...

Here,

  • OPTIONS: Flags that modify the search behavior.
  • FILE_NAME: The name of the file or command to locate.

Now, let's take a look at available options of the whereis command:

  • -b: Search only for binary files.
  • -s: Search only for source files.
  • -m: Search only for manual pages.
  • -u: Search for unusual files (files missing one or more of binary, source, or manual).
  • -B: Specify directories to search for binary files (must be followed by -f).
  • -S: Specify directories to search for source files (must be followed by -f).
  • -M: Specify directories to search for manual pages (must be followed by -f).
  • -f: Terminate directory lists provided with -B, -S, or -M, signaling the start of file names.
  • -l: Display directories that are searched by default.

1. Locate all files related to a command

To find all files (binary, source, and manual) related to a command, all you have to do is append the command name to the whereis command as shown here:

whereis command

For example, if I want to locate all files related to bash, then I would use the following:

whereis bash
Locate all files related to a command using whereis command
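On a typical Ubuntu system, the output looks something like this (exact paths may vary by distribution):

bash: /usr/bin/bash /usr/share/man/man1/bash.1.gz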

Here,

  • /usr/bin/bash: Path to the binary file.
  • /usr/share/man/man1/bash.1.gz: Path to the manual page.

2. Search for binary files only

To locate only the binary (executable) file of a command, use the -b flag along with the target command as shown here:

whereis -b command

If I want to search for the binary files for the ls command, then I would use the following:

whereis -b ls
Search for binary files only

3. Search for the manual page only

To locate only the manual page for a specific command, use the -m flag along with the targeted command as shown here:

whereis -m command

For example, if I want to search for the manual page location for the grep command, then I would use the following:

whereis -m grep
Search for manual page only using the whereis command
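The output looks something like this (paths may vary by distribution):

grep: /usr/share/man/man1/grep.1.gz /usr/share/info/grep.info.gz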

As you can see, it gave me two locations:

  • /usr/share/man/man1/grep.1.gz: The manual page, which can be accessed with the man grep command.
  • /usr/share/info/grep.info.gz: The info page, which can be accessed with the info grep command.

4. Search for source files only

To find only source code files associated with a command, use the -s flag along with the targeted command as shown here:

whereis -s command

For example, if I want to search for the source files of gcc, then I would use the following:

whereis -s gcc

My system is a fresh install and I haven't built any packages from source, so the command returned no output.

5. Specify custom directories for searching

To limit your search to specific directories, use options like -B, -S, or -M. For example, if I want to limit my search to the /bin directory for the cp command, then I would use the following command:

whereis -b -B /bin -f cp
Limit search for specific directories using whereis command
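On a typical system where the cp binary lives in /bin, the output is just that single path, for example:

cp: /bin/cp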

Here,

  • -b: This flag tells whereis to search only for binary files (executables), ignoring source and manual files.
  • -B /bin: The -B flag specifies a custom directory (/bin in this case) where whereis should look for binary files. It also limits the search to the /bin directory instead of searching all default directories.
  • -f cp: Marks the end of the directory list. Without -f, the whereis command would interpret cp as another directory.

6. Identify commands missing certain files (unusual files)

The whereis command can help you find commands that are missing one or more associated files (binary, source, or manual). This is particularly useful for troubleshooting or verifying file completeness.

For example, to search for commands in the /bin directory that are missing manual pages, you first have to change your directory to /bin and then use the -u flag to search for unusual files:

cd /bin
whereis -u -m *
Search for unusual files using whereis command

Wrapping Up...

This was a quick tutorial on how you can use the whereis command in Linux, including its syntax and practical examples. I hope you find this guide helpful.

If you have any queries or suggestions, leave us a comment.

by: Sreenath V
Tue, 04 Mar 2025 20:23:37 +0530


Kubernetes is a powerful platform designed to manage and automate the deployment, scaling, and operation of containerized applications. In simple terms, it helps you run and manage your software applications in an organized and efficient way.

kubectl is the command-line tool that helps you manage your Kubernetes cluster. It allows you to deploy applications, manage resources, and get information about your applications. Simply put, kubectl is the main tool you use to communicate with Kubernetes and get things done.

In this article, we will explore essential kubectl commands that will make managing your Kubernetes cluster easier and more efficient.

Essential Kubernetes Concepts

Before diving into the commands, let's quickly review some key Kubernetes concepts to ensure a solid understanding.

  • Pod: The smallest deployable unit in Kubernetes, containing one or more containers that run together on the same node.
  • Node: A physical or virtual machine in the Kubernetes cluster where Pods are deployed.
  • Services: An abstraction that defines a set of Pods and provides a stable network endpoint to access them.
  • Deployment: A controller that manages the desired state and lifecycle of Pods by creating, updating, and deleting them.
  • Namespace: A logical partition in a Kubernetes cluster to isolate and organize resources for different users or teams.

General Command Line Options

This section covers various optional flags and parameters that can be used with different kubectl commands. These options help customize the output format, specify namespaces, filter resources, and more, making it easier to manage and interact with your Kubernetes clusters.

The get command in kubectl is used to retrieve information about Kubernetes resources. It can list various resources such as pods, services, nodes, and more.

To retrieve a list of all the pods in your Kubernetes cluster in JSON format,

kubectl get pods -o json

List all the pods in the current namespace and output their details in YAML format.

kubectl get pods -o yaml

Output the details in plain-text format, including the node name for each pod,

kubectl get pods -o wide

List all the pods in a specific namespace using the -n option:

kubectl get pods -n <namespace_name>
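For example, to list all pods in the standard kube-system namespace:

kubectl get pods -n kube-system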

To create a Kubernetes resource from a configuration file, use the command:

kubectl create -f <filename>

To filter logs by a specific label, you can use:

kubectl logs -l <label_filter>

For example, to get logs from all pods labeled app=myapp, you would use:

kubectl logs -l app=myapp

For quick command line help, always use the -h option.

kubectl -h

Create and Delete Kubernetes Resources

In Kubernetes, you can create resources using the kubectl create command, update or apply changes to existing resources with the kubectl apply command, and remove resources with the kubectl delete command. These commands allow you to manage the lifecycle of your Kubernetes resources effectively and efficiently.

apply and create are two different approaches to creating resources in Kubernetes. While apply follows a declarative approach, create follows an imperative one.

Learn about these different approaches in our dedicated article.

kubectl apply vs create: What’s the Difference?
Two different approaches for creating resources in Kubernetes cluster. What’s the difference?

To apply a configuration file to a pod, use the command:

 kubectl apply -f <JSON/YAML configuration file>

If you have multiple JSON/YAML configuration files, you can use glob pattern matching here:

 kubectl apply -f '*.json'

To create a new Kubernetes resource using a configuration file,

kubectl create -f <configuration file>

The -f option can receive directory values or configuration file URL to create resource.

kubectl create -f <directory>

OR

kubectl create -f <URL to files>

The delete option is used to delete resources by file names, resources and names, or by resources and label selector.

To delete resources using the type and name specified in the configuration file,

 kubectl delete -f <configuration file>

Cluster Management and Context Commands

Cluster management in Kubernetes refers to the process of querying and managing information about the Kubernetes cluster itself. According to the official documentation, it involves various commands to display endpoint information, view and manage cluster configurations, list API resources and versions, and manage contexts.

The cluster-info command displays the endpoint information about the master and services in the cluster.

kubectl cluster-info

To print the client and server version information for the current context, use:

kubectl version

To display the merged kubeconfig settings,

kubectl config view

To extract and display the names of all users from the kubeconfig file, you can use a jsonpath expression.

kubectl config view -o jsonpath='{.users[*].name}'

Display the current context that kubectl is using,

kubectl config current-context

You can display a list of contexts with the get-context option.

kubectl config get-contexts

To set the default context, use:

kubectl config use-context <context-name>

Print the supported API resources on the server.

kubectl api-resources

It includes core resources like pods, services, and nodes, as well as custom resources defined by users or installed by operators.

You can use the api-versions command to print the supported API versions on the server in the form of "group/version". This command helps you identify which API versions are available and supported by your Kubernetes cluster.

kubectl api-versions

The --all-namespaces option available with the get command can be used to list the requested object(s) across all namespaces. For example, to list all pods existing in all namespaces,

kubectl get pods --all-namespaces

Daemonsets

A DaemonSet in Kubernetes ensures that all (or some) Nodes run a copy of a specified Pod, providing essential node-local facilities like logging, monitoring, or networking services. As nodes are added or removed from the cluster, DaemonSets automatically add or remove Pods accordingly. They are particularly useful for running background tasks on every node and ensuring node-level functionality throughout the cluster.

You can create a new DaemonSet with the command:

kubectl create daemonset <daemonset_name>

To list one or more DaemonSets, use the command:

kubectl get daemonset

The command,

kubectl edit daemonset <daemonset_name>

will open up the specified DaemonSet in the default editor so you can edit and update the definition.

To delete a daemonset,

kubectl delete daemonset <daemonset_name>

You can check the rollout status of a daemonset with the kubectl rollout command:

kubectl rollout status daemonset

The command below provides detailed information about the specified DaemonSet in the given namespace:

kubectl describe ds <daemonset_name> -n <namespace_name>

Deployments

Kubernetes deployments are essential for managing and scaling applications. They ensure that the desired number of application instances are running at all times, making it easy to roll out updates, perform rollbacks, and maintain the overall health of your application by automatically replacing failed instances.

In other words, Deployment allows you to manage updates for Pods and ReplicaSets in a declarative manner. By specifying the desired state in the Deployment configuration, the Deployment Controller adjusts the actual state to match at a controlled pace. You can use Deployments to create new ReplicaSets or replace existing ones while adopting their resources seamlessly. For more details, refer to StatefulSet vs. Deployment.

To list one or more deployments:

kubectl get deployment

To display detailed information about the specified deployment, including its configuration, events, and status,

kubectl describe deployment <deployment_name>

The below command opens the specified deployment configuration in the default editor, allowing you to make changes to its configuration:

kubectl edit deployment <deployment_name>

To create a deployment using kubectl, specify the image to use for the deployment:

kubectl create deployment <deployment_name> --image=<image_name>
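For example, to create a deployment named nginx running the official nginx image:

kubectl create deployment nginx --image=nginx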

You can delete a specified deployment and all of its associated resources, such as Pods and ReplicaSets by using the command:

kubectl delete deployment <deployment_name>

To check the rollout status of the specified deployment, which provides information about the progress of the deployment's update process:

kubectl rollout status deployment <deployment_name>

Perform a rolling update in Kubernetes by setting the container image to a new version for a specific deployment.

kubectl set image deployment/<deployment name> <container name>=image:<new image version>
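For example, assuming a deployment named web with a container named nginx (hypothetical names), updating it to a newer image version looks like this:

kubectl set image deployment/web nginx=nginx:1.27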

To roll back the specified deployment to the previous revision (undo),

kubectl rollout undo deployment/<deployment name>

The command below will forcefully replace a resource from a configuration file:

kubectl replace --force -f <configuration file>

Retrieving and Filtering Events

In Kubernetes, events are a crucial component for monitoring and diagnosing the state of your cluster. They provide real-time information about changes and actions happening within the system, such as pod creation, scaling operations, errors, and warnings.

To retrieve and list recent events for all resources in the system, providing valuable information about what has happened in your cluster, use the command:

kubectl get events

To filter and list only the events of type "Warning," thereby providing insights into any potential issues or warnings in your cluster,

kubectl get events --field-selector type=Warning

You can retrieve and list events sorted by their creation timestamp. This allows you to view events in chronological order.

kubectl get events --sort-by=.metadata.creationTimestamp

To list events, excluding those related to Pods,

kubectl get events --field-selector involvedObject.kind!=Pod

This helps you focus on events for other types of resources.

To list events specifically for a node with the given name,

kubectl get events --field-selector involvedObject.kind=Node,involvedObject.name=<node_name>

You can filter events, excluding those that are of the "Normal" type, allowing you to focus on warning and error events that may require attention:

kubectl get events --field-selector type!=Normal

Managing Logs

Logs are essential for understanding the real-time behavior and performance of your applications. They provide a record of activity and outputs generated by containers and pods, which can be invaluable for debugging and monitoring purposes.

To print the logs for the specified pod:

kubectl logs <pod_name>

To print the logs for the specified pod from the last hour:

kubectl logs --since=1h <pod_name>

You can read the most recent 50 lines of logs for the specified pod using the --tail option.

kubectl logs --tail=50 <pod_name>

The command below streams and continuously displays the logs of the specified pod, optionally filtered by the specified container:

kubectl logs -f <pod_name> [-c <container_name>]

For example, as per the official documentation, the following begins streaming the logs of the ruby container in pod web-1:

kubectl logs -f -c ruby web-1

To continuously display the logs of the specified pod in real-time,

kubectl logs -f <pod_name>

You can fetch the logs up to the current point in time for a specific container within the specified pod using the command:

kubectl logs -c <container_name> <pod_name>

To save the logs for the specified pod to a file,

kubectl logs <pod_name> > pod.log

To print the logs for the previous instance of the specified pod:

kubectl logs --previous <pod_name>

This is particularly useful for troubleshooting and analyzing logs from a previously failed pod.

Namespaces

In Kubernetes, namespaces are used to divide and organize resources within a cluster, creating separate environments for different teams, projects, or applications. This helps in managing resources, access permissions, and ensuring that each group or application operates independently and securely.

To create a new namespace with the specified name in your Kubernetes cluster:

kubectl create namespace <namespace_name>

To list all namespaces in your Kubernetes cluster, use the command:

kubectl get namespaces

You can get a detailed description of the specified namespace, including its status and resource quotas, using the command:

kubectl describe namespace <namespace_name>

To delete the specified namespace along with all the resources contained within it:

kubectl delete namespace <namespace_name>

The command

kubectl edit namespace <namespace_name>

opens the default editor on your machine with the configuration of the specified namespace, allowing you to make changes directly.

To display resource usage (CPU and memory) for all pods within a specific namespace, you can use the following command:

kubectl top pods --namespace=<namespace_name>

Nodes

In Kubernetes, nodes are the fundamental building blocks of the cluster, serving as the physical or virtual machines that run your applications and services.

To update the taints on one or more nodes,

kubectl taint node <node_name> <key>=<value>:<taint_effect>

List all nodes in your Kubernetes cluster:

kubectl get node

Remove a specific node from your Kubernetes cluster,

kubectl delete node <node_name>

Display resource usage (CPU and memory) for all nodes in your Kubernetes cluster:

kubectl top nodes

List all pods running on a node with a specific name:

kubectl get pods -o wide | grep <node_name>

Add or update annotations on a specific node:

kubectl annotate node <node_name> <key>=<value>
📋
Annotations are key-value pairs that can be used to store arbitrary non-identifying metadata.

Mark a node as unschedulable (no new pods will be scheduled on the specified node).

kubectl cordon <node_name>

Mark a previously cordoned (unschedulable) node as schedulable again:

kubectl uncordon <node_name>

Safely evict all pods from the specified node in preparation for maintenance or decommissioning:

kubectl drain <node_name>

Add or update labels on a specific node in your Kubernetes cluster:

kubectl label node <node_name> <key>=<value>

Pods

A pod is the smallest and simplest unit in the Kubernetes object model that you can create or deploy. A pod represents a single instance of a running process in your cluster and can contain one or more containers. These containers share the same network namespace, storage volumes, and lifecycle, allowing them to communicate with each other easily and share resources.

Pods are designed to host tightly coupled application components and provide a higher level of abstraction for deploying, scaling, and managing applications in a Kubernetes environment. Each pod is scheduled on a node, where the containers within it are run and managed together as a single, cohesive unit.

List all pods in your Kubernetes cluster:

kubectl get pods

List all pods in your Kubernetes cluster and sort them by the restart count of the first container in each pod:

kubectl get pods --sort-by='.status.containerStatuses[0].restartCount'

List all pods in your Kubernetes cluster that are currently in the "Running" phase:

kubectl get pods --field-selector=status.phase=Running

Delete a specific pod from your Kubernetes cluster:

kubectl delete pod <pod_name>

Display detailed information about a specific pod in your Kubernetes cluster:

kubectl describe pod <pod_name>

Create a pod using the specifications provided in a YAML file:

kubectl create -f pod.yaml

OR

kubectl apply -f pod.yaml
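For reference, a minimal pod.yaml might look like the sketch below; the nginx-pod name and nginx image are just placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
spec:
  containers:
  - name: nginx
    image: nginx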

To execute a command in a specific container within a pod in your Kubernetes cluster:

kubectl exec <pod_name> -c <container_name> -- <command>

Start an interactive shell session in a container within a specified pod:

# For Single Container Pods
kubectl exec -it <pod_name> -- /bin/sh

# For Multi-container pods,
kubectl exec -it <pod_name> -c <container_name> -- /bin/sh

Display resource (CPU and memory) usage statistics for all pods in your Kubernetes cluster:

kubectl top pods

Add or update annotations on a specific pod:

kubectl annotate pod <pod_name> <key>=<value>

To add or update the label of a pod:

kubectl label pod <pod_name> new-label=<label_value>

List all pods in your Kubernetes cluster and display their labels:

kubectl get pods --show-labels

Forward one or more local ports to a pod in your Kubernetes cluster, allowing you to access the pod's services from your local machine:

kubectl port-forward <pod_name> <port_number_to_listen_on>:<port_number_to_forward_to>
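For example, assuming a pod named web-pod (hypothetical) serving on port 80, the following makes it reachable on localhost port 8080:

kubectl port-forward web-pod 8080:80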

Replication Controllers

Replication Controller (RC) ensures that a specified number of pod replicas are running at any given time. If any pod fails or is deleted, the Replication Controller automatically creates a replacement. This self-healing mechanism enables high availability and scalability of applications.

To list all Replication Controllers in your Kubernetes cluster

kubectl get rc

List all Replication Controllers within a specific namespace:

kubectl get rc --namespace="<namespace_name>"

ReplicaSets

ReplicaSet is a higher-level concept that ensures a specified number of pod replicas are running at any given time. It functions similarly to a Replication Controller but offers more powerful and flexible capabilities.

List all ReplicaSets in your Kubernetes cluster.

kubectl get replicasets

To display detailed information about a specific ReplicaSet:

kubectl describe replicasets <replicaset_name>

Scale the number of replicas for a specific resource, such as a Deployment, ReplicaSet, or ReplicationController, in your Kubernetes cluster.

kubectl scale --replicas=<number_of_replicas> <resource_type>/<resource_name>
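For example, to scale a hypothetical deployment named web to three replicas:

kubectl scale --replicas=3 deployment/web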

Secrets

Secrets are used to store and manage sensitive information such as passwords, tokens, and keys.

Unlike regular configuration files, Secrets help ensure that confidential data is securely handled and kept separate from application code.

Secrets can be created, managed, and accessed within the Kubernetes environment, providing a way to distribute and use sensitive data without exposing it in plain text.

To create a Secret,

kubectl create secret (docker-registry | generic | tls)
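For example, a generic Secret can be created from a literal key-value pair (the name and value here are just placeholders):

kubectl create secret generic db-credentials --from-literal=password=S3cretValue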

List all Secrets in your Kubernetes cluster:

kubectl get secrets

Display detailed information about a specific Secret:

kubectl describe secret <secret_name>

Delete a specific Secret from your Kubernetes cluster:

kubectl delete secret <secret_name>

Services

Services act as stable network endpoints for a group of pods, allowing seamless communication within the cluster. They provide a consistent way to access pods, even as they are dynamically created, deleted, or moved.

By using a Service, you ensure that your applications can reliably find and interact with each other, regardless of the underlying pod changes.

Services can also distribute traffic across multiple pods, providing load balancing and improving the resilience of your applications.

To list all Services in your Kubernetes cluster:

kubectl get services

To display detailed information about a specific Service:

kubectl describe service <service_name>

Create a Service that exposes a deployment:

kubectl expose deployment <deployment_name> --port=<port> --target-port=<target_port> --type=<type>
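For example, assuming a deployment named web whose containers listen on port 8080 (hypothetical values), you could expose it as a NodePort Service on port 80:

kubectl expose deployment web --port=80 --target-port=8080 --type=NodePort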

Edit the configuration of a specific Service:

kubectl edit service <service_name>

Service Accounts

Service Accounts provide an identity for processes running within your cluster, enabling them to interact with the Kubernetes API and other resources. By assigning specific permissions and roles to Service Accounts, you can control access and limit the actions that pods and applications can perform, enhancing the security and management of your cluster.

Service Accounts are essential for managing authentication and authorization, ensuring that each component operates with the appropriate level of access and adheres to the principle of least privilege.

To list all Service Accounts in your Kubernetes cluster:

kubectl get serviceaccounts

Display detailed information about a specific Service Account:

kubectl describe serviceaccount <serviceaccount_name>

Next is replacing a service account. Before replacing, you need to export the existing Service Account definition to a YAML file.

kubectl get serviceaccount <serviceaccount_name> -o yaml > serviceaccount.yaml

Once you have made changes to the YAML file, replace the existing Service Account with the modified one:

kubectl replace -f serviceaccount.yaml

Delete a specific Service Account from your Kubernetes cluster:

kubectl delete serviceaccount <service_account_name>

StatefulSet

StatefulSet is a specialized workload controller designed for managing stateful applications. Unlike Deployments, which are suitable for stateless applications, StatefulSets provide guarantees about the ordering and uniqueness of pods.

Each pod in a StatefulSet is assigned a unique, stable identity and is created in a specific order. This ensures consistency and reliability for applications that require persistent storage, such as databases or distributed systems.

StatefulSets also facilitate the management of pod scaling, updates, and rollbacks while preserving the application's state and data.

To list all StatefulSets in your Kubernetes cluster:

kubectl get statefulsets

To delete a specific StatefulSet from your Kubernetes cluster without deleting the associated pods:

kubectl delete statefulset <stateful_set_name> --cascade=orphan

💬 Hope you like this quick overview of the kubectl commands. Please let me know if you have any questions or suggestions.

by: Abhishek Prakash
Fri, 28 Feb 2025 19:22:01 +0530


What career opportunities are available for someone starting with Linux? I am talking about entering this field and that's why I left out roles like SRE from this list. I would appreciate your feedback on it if you are already working in the IT industry. Let's help out our juniors.

What Kind of Job Can You Get if You Learn Linux?
While there are tons of job roles created around Linux, here are the ones that you can choose for an entry level career.

Here are the other highlights of this edition of LHB Linux Digest:

  • Zed IDE
  • Essential Docker commands
  • Self hosted project management tool
  • And more tools, tips and memes for you
  • This edition of LHB Linux Digest newsletter is supported by PikaPods.

📖 Linux Tips and Tutorials

Learn to increase (or perhaps decrease) swap on Ubuntu Linux. This should work on other distros too if they use a swap file instead of a swap partition.

How to Increase Swap Size on Ubuntu Linux
In this quick tip, you’ll learn to increase the swap size on Ubuntu and other Linux distributions.
by: Abhishek Prakash
Thu, 20 Feb 2025 17:48:14 +0530


Linux is the foundation of many IT systems, from servers to cloud platforms. Mastering Linux and related tools like Docker, Kubernetes, and Ansible can unlock career opportunities in IT, system administration, networking, and DevOps.

I mean, that's one of the reasons why many people use Linux.

The next question is, what kinds of job roles can you get if you want to begin a career with Linux?

Let me share the job roles, required skills, certifications, and resources to help you transition into a Linux-based career.

📋
There are many more job roles out there, such as Cloud Engineer and Site Reliability Engineer (SRE). The ones I discuss here are primarily focused on entry-level roles.

1. IT Technician

IT Technicians are responsible for maintaining computer systems, troubleshooting hardware/software issues, and supporting organizational IT needs.

They ensure smooth daily operations by resolving technical problems efficiently. So if you are a beginner and just want to get started in IT field, IT technician is one of the most basic yet important roles.

Responsibilities:

  • Install and configure operating systems, software, and hardware.
  • Troubleshoot system errors and repair equipment.
  • Provide user support for technical issues.
  • Monitor network performance and maintain security protocols.

Skills Required:

  • Basic Linux knowledge (file systems, permissions).
  • Networking fundamentals (TCP/IP, DNS).
  • Familiarity with common operating systems like Windows and MacOS.

Certifications:

  • CompTIA Linux+ (XK0-005): Validates foundational Linux skills such as system management, security, scripting, and troubleshooting. Recommended for entry-level roles.
  • CompTIA A+: Focuses on hardware/software troubleshooting and is ideal for beginners.
📋
This is an absolute entry-level job role, and some would argue that it is shrinking, or at least that there won't be as many opportunities as there used to be. Also, it might not be a high-paying job.

2. System Administrator

System administrators manage servers, networks, and IT infrastructure, and on a personal level, this is my favourite role.

As a system admin, you are expected to ensure system reliability, security, and efficiency by configuring software/hardware and automating repetitive tasks.

Responsibilities:

  • Install and manage operating systems (e.g., Linux).
  • Set up user accounts and permissions.
  • Monitor system performance and troubleshoot outages.
  • Implement security measures like firewalls.

Skills Required:

  • Proficiency in Linux commands and shell scripting.
  • Experience with configuration management tools (e.g., Ansible).
  • Knowledge of virtualization platforms (e.g., VMware).

Certifications:

  • Red Hat Certified System Administrator (RHCSA): Focuses on core Linux administration tasks such as managing users, storage configuration, basic container management, and security.
  • LPIC-1: Linux Administrator: Covers fundamental skills like package management and networking.
📋
This is a classic Linux job role. However, the opportunities started shrinking as the 'cloud' took over, which is why RHCSA and other sysadmin certifications have started including topics like Ansible in the mix.

3. Network Engineer

Being a network engineer, you are responsible for designing, implementing, and maintaining an organization's network infrastructure. In simple terms, you will be called first if there is any network-related problem ranging from unstable networks to misconfigured networks.

Responsibilities:

  • Configure routers, switches, firewalls, and VPNs.
  • Monitor network performance for reliability.
  • Implement security measures to protect data.
  • Document network configurations.

Skills Required:

  • Advanced knowledge of Linux networking (firewalls, IP routing).
  • Familiarity with protocols like BGP/OSPF.
  • Scripting for automation (Python or Bash).

Certifications:

  • Cisco Certified Network Associate (CCNA): Covers networking fundamentals such as IP connectivity, network access, automation, and programmability. It’s an entry-level certification for networking professionals.
  • CompTIA Network+: Focuses on troubleshooting network issues and implementing secure networks.
📋
A classic Linux-based job role that goes deep into networking. Many enterprises have their in-house network engineers. Other than that, data centers and cloud providers also employ network engineers.

4. DevOps Engineer

DevOps Engineers bridge development and operations teams to streamline software delivery. This is more of an advanced role where you will be focusing on automation tools like Docker for containerization and Kubernetes for orchestration.

Responsibilities:

  • Automate CI/CD pipelines using tools like Jenkins.
  • Deploy containerized applications using Docker.
  • Manage Kubernetes clusters for scalability.
  • Optimize cloud-based infrastructure (AWS/Azure).

Skills Required:

  • Strong command-line skills in Linux.
  • Proficiency in DevOps tools (e.g., Terraform).
  • Understanding of cloud platforms.

Certifications:

  • Certified Kubernetes Administrator (CKA): Validates expertise in managing Kubernetes clusters by covering topics like installation/configuration, networking, storage management, and troubleshooting.
  • AWS Certified DevOps Engineer – Professional: Focuses on automating AWS deployments using DevOps practices.
📋
The newest but most in-demand job role these days. A certification like CKA or CKAD can help you skip the queue and get the job. It also pays more than the other roles discussed here.
Linux for DevOps: Essential Knowledge for Cloud Engineers
Learn the essential concepts, command-line operations, and system administration tasks that form the backbone of Linux in the DevOps world.
Here is a quick comparison of the certifications for each role:

  • CompTIA Linux+ (IT Technician): System management, security basics, scripting. Cost: $207, valid for 3 years.
  • Red Hat Certified System Admin (System Administrator): User management, storage configuration, basic container management. Cost: $500, valid for 3 years.
  • Cisco CCNA (Network Engineer): Networking fundamentals including IP connectivity/security. Cost: $300, valid for 3 years.
  • Certified Kubernetes Admin (DevOps Engineer): Cluster setup/management, troubleshooting Kubernetes environments. Cost: $395, valid for 3 years.

Skills required across roles

Here, I have listed the skills that are required for all the 4 roles listed above:

Core skills:

  1. Command-line proficiency: Navigating file systems and managing processes.
  2. Networking basics: Understanding DNS, SSH, and firewalls.
  3. Scripting: Automating tasks using Bash or Python.

Advanced skills:

  1. Configuration management: Tools like Ansible or Puppet.
  2. Containerization: Docker for packaging applications.
  3. Orchestration: Kubernetes for managing containers at scale.

Free resources to Learn Linux

For beginners:

  1. Bash Scripting for Beginners: Our in-house free course for command-line basics.
  2. Linux Foundation Free Courses: Covers Linux basics like command-line usage.
  3. LabEx: Offers hands-on labs for practising Linux commands.
  4. Linux for DevOps: Essential Linux knowledge for cloud and DevOps engineers.
  5. Learn Docker: Our in-house effort to provide basic Docker tutorials for free.

For advanced topics:

  1. KodeKloud: Interactive courses on Docker/Kubernetes with real-world scenarios.
  2. Coursera: Free trials for courses like "Linux Server Management."
  3. RHCE Ansible EX294 Exam Preparation Course: Our editorial effort to provide a free Ansible course covering basic to advanced Ansible topics.

Conclusion

I would recommend you start by mastering the basics of Linux commands before you dive into specialized tools like Docker or Kubernetes.

We have a complete course on Linux command line fundamentals. No matter which role you are preparing for, you cannot ignore the basics.

Linux for DevOps: Essential Knowledge for Cloud Engineers
Learn the essential concepts, command-line operations, and system administration tasks that form the backbone of Linux in the DevOps world.

Use free resources to build your knowledge base and validate your skills through certifications tailored to your career goals. With consistent learning and hands-on practice, you can secure a really good role in the tech industry!

by: Abhishek Kumar
Wed, 19 Feb 2025 15:50:46 +0530


If you've ever wanted a secure way to access your home network remotely, whether for SSH access, private browsing, or simply keeping your data encrypted on public Wi-Fi, self-hosting a VPN is the way to go.

While commercial VPN services exist, hosting your own gives you complete control and ensures your data isn't being logged by a third party.

💡
Self-hosting a VPN requires opening a port on your router, but some ISPs, especially those using CGNAT, won't allow this, leaving you without a publicly reachable IP. If that's the case, you can either check if your ISP offers a static IP (sometimes available with business plans) or opt for a VPS instead.

I’m using a Linode VPS for this guide, but if you're running this on your home network, make sure your router allows port forwarding.


Get started on Linode with a $100, 60-day credit for new users.

What is PiVPN?

PiVPN is a lightweight, open-source project designed to simplify setting up a VPN server on a Raspberry Pi or any Debian-based system.

It supports WireGuard and OpenVPN, allowing you to create a secure, private tunnel to your home network or VPS.

The best part? PiVPN takes care of the heavy lifting with a one-command installer and built-in security settings.

With PiVPN, you can:

  • Securely access your home network from anywhere
  • Encrypt your internet traffic on untrusted networks (coffee shops, airports, etc.)
  • Avoid ISP snooping by routing traffic through a VPS
  • Run it alongside Pi-hole for an ad-free, secure browsing experience

PiVPN makes self-hosting a VPN accessible, even if you’re not a networking expert. Now, let’s get started with setting it up.

Installing PiVPN

Now that we've handled the prerequisites, it's time to install PiVPN. The installation process is incredibly simple.

Open a terminal on your server and run:

curl -L https://install.pivpn.io | bash

This command will launch an interactive installer that will guide you through the setup.

1. Assign a Static IP Address

You'll be prompted to ensure your server has a static IP. If your local IP changes, your port forwarding rules will break, rendering the VPN useless.

If running this on a VPS, the external IP is usually static.

2. Choose a User

Select the user that PiVPN should be installed under. If this is a dedicated server for VPN use, the default user is fine.

3. Choose a VPN Type: WireGuard or OpenVPN

PiVPN supports both WireGuard and OpenVPN. For this guide, I'll go with WireGuard, but you can choose OpenVPN if needed.

4. Select the VPN Port

You'll need to specify the port. For WireGuard, this defaults to 51820 (this is the same port you need to forward on your router).

5. Choose a DNS Provider

PiVPN will ask which DNS provider to use. If you have a self-hosted DNS, select "Custom" and enter the IP. Otherwise, pick from options like Google, Cloudflare, or OpenDNS.

6. Public IP vs. Dynamic DNS

If you have a static public IP, select that option. If your ISP gives you a dynamic IP, use a Dynamic DNS (DDNS) service to map a hostname to your changing IP.

7. Enable Unattended Upgrades

For security, it's a good idea to enable automatic updates. VPN servers are a crucial entry point to your network, so keeping them updated reduces vulnerabilities.

After these steps, PiVPN will complete the installation.

Creating VPN profiles

Now that the VPN is up and running, we need to create client profiles for devices that will connect to it.

Run the following command:

pivpn add

You'll be asked to enter a name for the client profile.

Once created, the profile file will be stored in:

/home/<user>/configs/

Connecting devices

On Mobile (WireGuard App)

  1. Install the WireGuard app from the Play Store or App Store.
  2. Transfer the .conf file to your phone (via email, Airdrop, or a file manager).
  3. Import it into the WireGuard app and activate the connection.

On Desktop (Linux)

  • Install the WireGuard client for your OS.
  • Copy the .conf file into the /etc/wireguard directory.
  • Connect to the VPN, as shown in the example below.
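As a rough sketch, assuming the profile was saved as home.conf, bringing the tunnel up with the standard wg-quick tool looks like this:

# Copy the profile into place (the file name becomes the interface name)
sudo cp home.conf /etc/wireguard/home.conf
# Bring the tunnel up
sudo wg-quick up home
# Bring it down again when you are done
sudo wg-quick down home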

Conclusion

And just like that, we now have our own self-hosted VPN up and running! No more sketchy public Wi-Fi risks, no more ISP snooping, and best of all, full control over our own encrypted tunnel.

Honestly, PiVPN makes the whole process ridiculously easy compared to manually setting up WireGuard or OpenVPN from scratch.

It took me maybe 15–20 minutes from start to finish, and that’s including the time spent debating whether I should stick to my usual WireGuard setup or try OpenVPN just for fun.

If you’ve been thinking about rolling your own VPN, I’d say go for it. It’s a great weekend project that gives you actual privacy, plus it’s a fun way to dive into networking without things getting overwhelming.

Now, I’m curious, do you already use a self-hosted VPN, or are you still sticking with a paid service?

And hey, if you’re looking for a simpler “click-and-go” solution, we’ve also put together a list of the best VPN services; check it out if self-hosting isn’t your thing!

by: Linux Wolfman
Sun, 16 Feb 2025 23:47:19 +0000


Linux is a free and open source technology, but you will need to choose a Linux distribution to actually use it as a working solution. Therefore, in this blog post, we will review the best Linux distributions you can choose in 2025 so you can select what you need based on the latest information.

Best Linux for the Enterprise: Red Hat Enterprise Linux

Red Hat Enterprise Linux (RHEL) is the best Linux distribution for enterprises due to its focus on stability, security, and long-term support. It offers a 10-year lifecycle with regular updates, ensuring reliability for mission-critical applications. RHEL’s advanced security features, like SELinux, and compliance with industry standards make it ideal for industries such as finance and government. Its extensive ecosystem, integration with cloud platforms, and robust support from Red Hat’s expert team further enhance its suitability for large-scale, hybrid environments. RHEL also benefits from industry standardization: it is so commonly used in enterprise settings that many employees are already comfortable working with it.

Best Linux for the Developers and Programmers: Debian

Debian Linux is highly regarded for developers and programmers due to its vast software repository, offering over 59,000 packages, including the latest tools and libraries for coding. Its stability and reliability make it a dependable choice for development environments, while its flexibility allows customization for specific needs. Debian’s strong community support, commitment to open-source principles, and compatibility with multiple architectures further enhance its appeal for creating, testing, and deploying software efficiently. Debian is also known for its free software philosophy, which ensures the OS is free of restrictive intellectual property and helps developers make sure that what they build is portable, without any hooks or gotchas.

Best Alternative to Red Hat Enterprise Linux: Rocky Linux

Rocky Linux is the best alternative to Red Hat Enterprise Linux (RHEL) because it was designed as a 1:1 binary-compatible replacement after CentOS shifted to a rolling-release model. It provides enterprise-grade stability, long-term support, and a focus on security, mirroring RHEL’s strengths. As a community-driven project, Rocky Linux is free, ensuring cost-effectiveness without sacrificing reliability. Its active development and commitment to staying aligned with RHEL updates make it ideal for enterprises seeking a no-compromise, open-source solution.

Best Linux for Laptops and Home Computers: Ubuntu

Ubuntu is the best Linux distro for laptops and home computers due to its user-friendly interface, making it accessible for beginners and efficient for experienced users. It offers excellent hardware compatibility, ensuring seamless performance on a wide range of devices. Ubuntu’s regular updates, extensive software repository, and strong community support provide a reliable and customizable experience. Additionally, its focus on power management and pre-installed drivers optimizes it for laptop use, while its polished desktop environment enhances home computing.

Best Linux for Gaming: Pop!_OS

Pop!_OS is the best Linux distro for gaming due to its seamless integration of gaming tools, excellent GPU support, and user-friendly design. Built on Ubuntu, it offers out-of-the-box compatibility with NVIDIA and AMD graphics cards, including easy driver switching for optimal performance. Pop!_OS includes Steam pre-installed and supports Proton, ensuring smooth gameplay for both native Linux and Windows games. Its intuitive interface, customizable desktop environment, and focus on performance tweaks make it ideal for gamers who want a reliable, hassle-free experience without sacrificing versatility.

Best Linux for Privacy: PureOS

PureOS is the best Linux distro for privacy due to its unwavering commitment to user freedom and security. Developed by Purism, it is based on Debian and uses only free, open-source software, eliminating proprietary components that could compromise privacy. PureOS integrates privacy-focused tools like the Tor Browser and encryption utilities by default, ensuring anonymous browsing and secure data handling. Its design prioritizes user control, allowing for customizable privacy settings, while regular updates maintain robust protection. Additionally, its seamless integration with Purism’s privacy-focused hardware enhances its effectiveness, making it ideal for privacy-conscious users seeking a stable and trustworthy operating system.

Best Linux for building Embedded Systems or into Products: Alpine Linux

Alpine Linux is the best Linux distribution for building embedded systems or integrating into products due to its unmatched combination of lightweight design, security, and flexibility. Its minimal footprint, achieved through musl libc and busybox, ensures efficient use of limited resources, making it ideal for devices like IoT gadgets, wearables, and edge hardware. Alpine prioritizes security with features like position-independent executables, a hardened kernel, and a focus on simplicity, reducing attack surfaces. The apk package manager enables fast, reliable updates, while its ability to run entirely in RAM ensures quick boot times and resilience. Additionally, Alpine’s modular architecture and active community support make it highly customizable, allowing developers to tailor it precisely to their product’s needs.

Other Notable Linux Distributions

Other notable distributions that did not win a category award above include: Linux Mint, Arch Linux, Manjaro, Fedora, openSUSE, and AlmaLinux. We will briefly describe them and their benefits.

Linux Mint: Known for its user-friendly interface and out-of-the-box multimedia support, Linux Mint is good at providing a stable, polished experience for beginners and those transitioning from Windows or macOS. Its Cinnamon desktop environment is intuitive, and it excels in home computing and general productivity. Linux Mint is based on Ubuntu: it builds upon Ubuntu’s stable foundation, using its repositories and package management system, while adding its own customizations to enhance the experience for beginners and general users.

Arch Linux: Known for its minimalist, do-it-yourself approach, Arch Linux is good at offering total control and customization for advanced users. It uses a rolling-release model, ensuring access to the latest software, and is ideal for those who want to build a system tailored to their exact needs. Arch Linux is an original, independent Linux distribution, not derived from any other system. It uses its own unique package format (.pkg.tar.zst) and is built from the ground up with a focus on simplicity, minimalism, and user control. Arch has a large, active community that operates independently from major distributions like RHEL, Debian, and SUSE, and it maintains its own repositories and development ecosystem, emphasizing a rolling-release model and the Arch User Repository (AUR) for community-driven software.

Manjaro: Known for its Arch-based foundation with added user-friendliness, Manjaro is good at balancing cutting-edge software with ease of use. It provides pre-configured desktops, automatic hardware detection, and a curated repository, making it suitable for users who want Arch’s power without the complexity.

Fedora: Known for its innovation and use of bleeding-edge technology, Fedora is good at showcasing the latest open-source advancements while maintaining stability. Backed by Red Hat, it excels in development, testing new features, and serving as a reliable platform for professionals and enthusiasts.

openSUSE: Known for its versatility and powerful configuration tools like YaST, openSUSE is good at catering to both beginners and experts. It offers two models—Tumbleweed (rolling release) and Leap (stable)—making it ideal for diverse use cases, from servers to desktops.

AlmaLinux: Known as a free, community-driven alternative to Red Hat Enterprise Linux (RHEL), AlmaLinux is good at providing enterprise-grade stability and long-term support. It ensures 1:1 binary compatibility with RHEL, making it perfect for businesses seeking a cost-effective, reliable server OS.

Conclusion

By reviewing the criteria above you should be able to pick the best Linux distribution for you in 2025!

by: Abhishek Prakash
Wed, 12 Feb 2025 15:59:44 +0530


Guess who's rocking Instagram? That's right. It's Linux Handbook ;)

If you are an active Instagram user, do follow us as I am posting interesting graphics and video memes.

Here are the other highlights of this edition of LHB Linux Digest:

  • Vi vs Vim
  • Flexpilot IDE
  • Self hosted time tracker
  • And more tools, tips and memes for you
  • This edition of LHB Linux Digest newsletter is supported by PikaPods.

❇️ Self-hosting without hassle

PikaPods allows you to quickly deploy your favorite open source software. All future updates are handled automatically by PikaPods while you enjoy using the software. I use it to self host Umami analytics.

Oh! You get $5 free credit, so try it out and see if you can rely on PikaPods.

PikaPods - Instant Open Source App Hosting
Run the finest Open Source web apps from $1/month, fully managed, no tracking, no ads, full privacy. Self-hosting was never this convenient.

📖 Linux Tips and Tutorials

Get more out of your bash history with these tips.

5 Simple Bash History Tricks Every Linux User Should Know
Effectively using bash history will save you plenty of time in the Linux terminal.
by: Abhishek Prakash
Wed, 12 Feb 2025 09:14:09 +0530


I have encountered situations where I executed vi and it still ran Vim instead of the program I had requested (Vi). That was just one part of the confusion.

I have seen people using Vi and Vim interchangeably, even though they are not the same editors.

Sure, many of you might know that Vim is an improved version of Vi (which is obvious, as Vim stands for Vi IMproved), but there are still many differences and scenarios where you might want to use Vi over Vim.

Vi vs Vim: Why the confusion?

The confusion between Vi and Vim starts from their shared history and overlapping functionality. Vi, short for Visual Editor, was introduced in 1976 as part of the Unix operating system. It became a standard text editor on Unix systems, renowned for its efficiency and minimalism.

Vim, on the other hand, stands for Vi IMproved, and was developed in 1991 as an enhanced version of Vi, offering additional features like syntax highlighting, plugin support, and advanced editing capabilities.

Adding to the confusion is a common practice among Linux distributions: many create an alias or symlink that maps vi to Vim by default. This means that when users type vi in a terminal, they are often unknowingly using Vim instead of the original Vi. As a result, many users are unaware of where Vi ends and Vim begins.
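You can check what vi actually launches on your system; on many distributions it resolves to a Vim binary:

type -a vi
readlink -f "$(which vi)"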

While both editors share the same core functionality, Vim extends Vi with numerous modern features that make it more versatile for contemporary workflows. For most users, this aliasing works in their favour since Vim’s expanded feature set is generally more useful.

However, it has also led to widespread misunderstanding about what distinguishes the two editors.

Key differences between Vi and Vim

Now, let's take a look at the key differences between Vi and Vim:

  • Undo levels: Vi offers a single undo; Vim offers unlimited undo and redo.
  • Syntax highlighting: Not available in Vi; Vim supports it for multiple programming languages.
  • Navigation in insert mode: Not supported in Vi (requires exiting to command mode); supported in Vim (arrow keys work in insert mode).
  • Plugins and extensibility: Not supported in Vi; Vim supports third-party plugins.
  • Visual mode: Not available in Vi; Vim allows block selection and manipulation.
  • Tabs and windows: Vi offers basic single-file editing; Vim supports tabs and split windows.
  • Learning curve: Vi is simpler due to fewer features; Vim is steeper due to additional functionality.

Is anything still better about Vi?

While I was not sure if anything was still positive about Vi, when I talked to some sysadmins and power users, I came across some surprising points which prove that Vi is still relevant:

  • Minimalism: Vi’s simplicity makes it extremely lightweight on system resources. This can be advantageous on older hardware or when working in minimalistic environments.
  • Universality: As a default editor on all POSIX-compliant systems, Vi is guaranteed to be available without installation. This makes it a reliable fallback editor when working on constrained systems or during system recovery.
  • Consistency: Vi adheres strictly to its original design philosophy, avoiding potential quirks or bugs introduced by newer features in Vim.

Who should choose Vi?

Based on the points I made, you might assume that the userbase for Vi is close to nothing, but that is not true. I know multiple users who prefer Vi over anything modern.

Here are groups of people who can benefit from Vi:

  • System administrators on legacy systems: If you work on older Unix systems or environments where only basic tools are available, learning Vi is a dependable choice.
  • Minimalists: Those who value simplicity and prefer minimal resource usage may find Vi sufficient for their needs.

Who should choose Vim?

For most users, however, Vim is the better choice:

  • Learning the basics: Beginners aiming to understand core text-editing concepts might benefit from starting with Vim, as Vi's lack of conveniences could make the experience even more overwhelming.
  • Developers and programmers: With features like syntax highlighting, plugin support, and advanced navigation tools, Vim is ideal for coding tasks.
  • Power users: Those who require multilevel undo, visual mode for block selection, or split windows for multitasking will find Vim indispensable.
  • Cross-platform users: Vim’s availability across multiple platforms ensures a consistent experience regardless of the operating system.

In fact, unless you’re working in an environment where minimalism is critical or resources are highly constrained, you’re almost certainly better off using Vim. Its additional features make it far more versatile while still retaining the efficiency of its predecessor.

Start Learning Vim [Tutorial Series]
Start learning Vim by following these Vim tips for beginners and advanced users.


Conclusion

Vi and Vim cater to different needs despite their shared lineage. While Vi remains a lightweight, universal editor suitable for basic tasks or constrained environments, Vim extends its capabilities significantly, making it a powerful tool for modern development workflows.

The choice ultimately depends on your specific requirements—whether you value simplicity or need advanced functionality.

Which one do you use? Let us know in the comments.

by: LHB Community
Tue, 11 Feb 2025 15:57:18 +0530


As an alert Linux sysadmin, you may want to monitor the web traffic of specific services. Here's why:

  • Telemetry detection: Some tools that handle sensitive user data go online when they shouldn't. Offline wallets or note-taking applications are good examples.
  • Application debugging when something goes wrong.
  • High traffic usage: 4G or 5G connections are usually limited, so it's better for your wallet to stay within those limits.

The situation becomes complicated on servers due to the popularity of containers, mostly Docker or LXC.

How do you identify a single application's traffic within this waterfall?

Httptap from Monastic Academy is a great solution for this purpose. It works without root access; you only need write access to /dev/net/tun so that it can use the TUN virtual device for traffic interception.

Installing Httptap

The application is written in Go, and the binary can be easily downloaded from the GitHub releases page by running these three commands one by one:

wget -c https://github.com/monasticacademy/httptap/releases/latest/download/httptap_linux_$(uname -m).tar.gz
tar xf httptap_linux_$(uname -m).tar.gz
sudo mv httptap /usr/bin && rm -rf httptap_linux_$(uname -m).tar.gz

Install with Go:

go install github.com/monasticacademy/httptap@latest

Another way is to check your Linux distribution's repositories for the httptap package. The Repology project is a good way to see which distributions currently ship a Httptap package.

On Ubuntu 24.04 and later, the following AppArmor restrictions need to be disabled:

sudo sysctl -w kernel.apparmor_restrict_unprivileged_unconfined=0
sudo sysctl -w kernel.apparmor_restrict_unprivileged_userns=0

Practical examples for common use-cases

For a quick start, let's load the website "linuxhandbook.com" using curl:

httptap -- curl -s -o /dev/null https://linuxhandbook.com

Looks great: it tells us that the GET request returned 141714 bytes with status code 200, which means OK. We use -s -o /dev/null to suppress curl's own output so that only Httptap's report is shown.

---> GET https://linuxhandbook.com/
<--- 200 https://linuxhandbook.com/ (141714 bytes)

Let's try the google.com website, which uses redirects, this time via Python's requests library:

httptap -- python -c "import requests; requests.get('https://google.com')"

---> GET https://google.com/
<--- 301 https://google.com/ (220 bytes)
decoding gzip content
---> GET https://www.google.com/
<--- 200 https://www.google.com/ (20721 bytes)

It works and shows us the 301 redirect as well as the gzip-compressed content. Not bad at all.
Let's say we have a few instances in Google Cloud, managed by the CLI tool gcloud. What HTTP endpoints does this command use? Let's take a look:

httptap -- gcloud compute instances list
---> POST https://oauth2.googleapis.com/token
<--- 200 https://oauth2.googleapis.com/token (997 bytes)
---> GET https://compute.googleapis.com/compute/v1/projects/maple-public-website/aggregated/instances?alt=json&includeAllScopes=True&maxResults=500&returnPartialSuccess=True
<--- 200 https://compute.googleapis.com/compute/v1/projects/maple-public-website/aggregated/instances?alt=json&includeAllScopes=True&maxResults=500&returnPartialSuccess=True (19921 bytes)

The answer is compute.googleapis.com.

OK, we have Dropbox storage and the rclone tool to manage it from the command line. Which API endpoint does it use for Dropbox?

$ httptap -- rclone lsf dropbox:
decoding gzip content
---> POST https://api.dropboxapi.com/2/files/list_folder
<--- 200 https://api.dropboxapi.com/2/files/list_folder (2119 bytes)

The answer is loud and clear again: api.dropboxapi.com.
Let's play a bit with DoH, encrypted DNS over HTTPS. We will use Quad9, a well-known DNS service that supports DoH via the https://dns.quad9.net/dns-query endpoint.

$ httptap -- curl -sL --doh-url https://dns.quad9.net/dns-query https://linuxhandbook.com -o /dev/null
---> POST https://dns.quad9.net/dns-query
<--- 200 https://dns.quad9.net/dns-query (83 bytes)
---> POST https://dns.quad9.net/dns-query
<--- 200 https://dns.quad9.net/dns-query (119 bytes)
---> GET https://linuxhandbook.com/
<--- 200 https://linuxhandbook.com/ (141727 bytes)

Now we can see that curl makes two POST requests to the Quad9 DoH endpoint (one each for the A and AAAA lookups) and one GET request to the target, linuxhandbook.com, all successful.
Let's look under the hood and print the headers and payloads of the DNS-over-HTTPS requests with the --head and --body flags:

./httptap --head --body -- curl -sL --doh-url https://dns.quad9.net/dns-query https://linuxhandbook.com -o /dev/null

---> POST https://dns.quad9.net/dns-query
> Accept: */*
> Content-Type: application/dns-message
> Content-Length: 35
[binary DNS query payload]
<--- 200 https://dns.quad9.net/dns-query (83 bytes)
< Content-Type: application/dns-message
< Cache-Control: max-age=300
< Content-Length: 83
< Server: h2o/dnsdist
< Date: Sun, 09 Feb 2025 15:43:37 GMT
[binary DNS response payload]
---> POST https://dns.quad9.net/dns-query
> Accept: */*
> Content-Type: application/dns-message
> Content-Length: 35
[binary DNS query payload]
<--- 200 https://dns.quad9.net/dns-query (119 bytes)
< Server: h2o/dnsdist
< Date: Sun, 09 Feb 2025 15:43:38 GMT
< Content-Type: application/dns-message
< Cache-Control: max-age=300
< Content-Length: 119
[binary DNS response payload]
---> GET https://linuxhandbook.com/
> User-Agent: curl/8.11.1
> Accept: */*
<--- 200 https://linuxhandbook.com/ (141742 bytes)
< Cache-Control: private, max-age=0, must-revalidate, no-cache, no-store
< Pagespeed: off
< X-Content-Type-Options: nosniff
< X-Frame-Options: SAMEORIGIN
< X-Origin-Cache-Control: public, max-age=0
< Report-To: {"endpoints":[{"url":"https:\/\/a.nel.cloudflare.com\/report\/v4?s=bAHIntCPfaGgoUwEwhk5QWPETFvnq5K9Iw60TGIAcnTisEfo%2BjKulz%2FJP7rTPgmyznVSc%2BSwIOKtajz%2BZTg71To4BuapDd%2BKdgyar%2FpIGT76XWH9%2FVNMyliYqgceD7DwuBmiPr3F77zxa7b6ty8J"}],"group":"cf-nel","max_age":604800}
< Server: cloudflare
< Cf-Ray: 90f4fa286f9970bc-WAW
< X-Middleton-Response: 200
< X-Powered-By: Express
< Cf-Cache-Status: DYNAMIC
< Alt-Svc: h3=":443"; ma=86400
< Date: Sun, 09 Feb 2025 15:43:48 GMT
< Display: orig_site_sol
< Expires: Sat, 08 Feb 2025 15:43:48 GMT
< Response: 200
< Set-Cookie: ezoictest=stable; Path=/; Domain=linuxhandbook.com; Expires=Sun, 09 Feb 2025 16:13:48 GMT; HttpOnly
< Strict-Transport-Security: max-age=63072000; includeSubDomains; preload
< X-Middleton-Display: orig_site_sol
< Server-Timing: cfL4;desc="?proto=TCP&rtt=0&min_rtt=0&rtt_var=0&sent=0&recv=0&lost=0&retrans=0&sent_bytes=0&recv_bytes=0&delivery_rate=0&cwnd=0&unsent_bytes=0&cid=0a7f5fbffa6452d4&ts=351&x=0"
< Content-Type: text/html; charset=utf-8
< Vary: Accept-Encoding,User-Agent
< X-Ezoic-Cdn: Miss
< X-Sol: orig
< Nel: {"success_fraction":0,"report_to":"cf-nel","max_age":604800}
<!DOCTYPE html><html lang="en" class="group/html min-h-screen has-inline-code-block has-gray-scale-Slate" data-prismjs-copy="Copy" data-prismjs-copy-error="Error" data-prismjs-copy-success="Copied"><head><meta charset="UTF-8"/>
...

Fantastic! Httptap just intercepted the HTTP headers thanks to the --head option and the payloads because the --body option was used.

HAR

To work more comfortably with HTTP requests and responses, Httptap supports HAR format:

httptap --dump-har out.har -- curl -Lso /dev/null https://linuxhandbook.com

There are many HAR viewer applications; let's open the file in Google's HAR Analyzer:

httptap-har

More useful Httptap options:

  • --no-new-user-namespace - run as root without a user namespace.
  • --subnet and --gateway - subnet and gateway of the network interface visible to the subprocess.
  • --dump-tcp - dump all TCP packets.
  • --http HTTP - list of TCP ports on which to intercept HTTP traffic (default: 80).
  • --https HTTPS - list of TCP ports on which to intercept HTTPS traffic (default: 443).

Httptap runs the process in an isolated network namespace and also mounts an overlay filesystem over /etc/resolv.conf to make sure the correct DNS server is used. A Linux network namespace is its own set of network interfaces and routing rules, and httptap uses one so that it does not interfere with the rest of the system's traffic.

It also injects a Certificate Authority to be able to decrypt HTTPS traffic. Httptap creates a TUN device and runs the subprocess in an environment where all network traffic is routed through this device, just like OpenVPN.

Httptap parses the IP packets, including inner TCP and UDP packets, and writes back raw IP packets using a software implementation of the TCP/IP protocol.
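To get a feel for the building blocks involved, here is a rough manual sketch of a network namespace with a TUN device. It is purely illustrative: httptap sets all of this up for you, without root, via unprivileged user namespaces, and the names demo and tun0 below are arbitrary.

sudo ip netns add demo                                   # create an isolated network namespace
sudo ip netns exec demo ip tuntap add dev tun0 mode tun  # add a TUN device inside it
sudo ip netns exec demo ip addr add 10.1.1.1/24 dev tun0
sudo ip netns exec demo ip link set tun0 up
sudo ip netns exec demo ip route add default dev tun0    # route everything through the TUN device
# a userspace process reading /dev/net/tun would now receive the namespace's raw IP packets
sudo ip netns del demo                                   # clean up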

Advanced - modifying requests and responses

Currently there is no interface or command-line option for this, but it's possible with a simple source code modification. Basic Go programming skills are required, of course.

The code that handles HTTP requests is here, and the code that handles responses is a few lines below it. So it's easy to modify outgoing traffic the same way you would modify a normal Go HTTP request. A real example: modify or randomize application telemetry by inserting random data to make it less useful.

Conclusion

There are a few related tools that I find interesting and would like to share with you:

  • Wireshark - the real must-have tool if you want to know what's going on on your network interfaces.
  • OpenSnitch - interactive application firewall inspired by Little Snitch for macOS.
  • Douane - personal firewall that protects a user's privacy by allowing a user to control which applications can connect to the internet from their GNU/Linux computer.
  • Adnauseam - "clicking ads, so you don't have to".

I hope you enjoy using Httptap as much as I do 😄

✍️
Author Info: Paul is a Linux user since late 00s, FOSS advocate, always exploring new open-source technologies. Passionate about privacy, security, networks and community-driven development. You can find him on Mastodon.
by: Abhishek Kumar
Fri, 31 Jan 2025 17:03:02 +0530


I’ve been using Cloudflare Tunnel for over a year, and while it’s great for hosting static HTML content securely, it has its limitations.

For instance, if you’re running something like Jellyfin, you might run into issues with bandwidth limits, which can lead to account bans due to their terms of service.

Cloudflare Tunnel is designed with lightweight use cases in mind, but what if you need something more robust and self-hosted?

Let me introduce you to some fantastic open-source alternatives that can give you the freedom to host your services without restrictions.

1. ngrok (OSS Edition)

ngrok github repo

ngrok is a globally distributed reverse proxy designed to secure, protect, and accelerate your applications and network services, regardless of where they are hosted.

Acting as the front door to your applications, ngrok integrates a reverse proxy, firewall, API gateway, and global load balancing into one seamless solution.

Although the original open-source version of ngrok (v1) is no longer maintained, the platform continues to contribute to the open-source ecosystem with tools like Kubernetes operators and SDKs for popular programming languages such as Python, JavaScript, Go, Rust, and Java.

Key features:

  • Securely connect APIs and databases across networks without complex configurations.
  • Expose local applications to the internet for demos and testing without deployment.
  • Simplify development by inspecting and replaying HTTP callback requests.
  • Implement advanced traffic policies like rate limiting and authentication with a global gateway-as-a-service.
  • Control device APIs securely from the cloud using ngrok on IoT devices.
  • Capture, inspect, and replay traffic to debug and optimize web services.
  • Includes SDKs and integrations for popular programming languages to streamline workflows.
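For a sense of how simple the workflow is, here is what exposing a local service typically looks like with the ngrok agent (a sketch assuming the agent is installed and your authtoken is configured; 8080 is just an example port):

ngrok http 8080    # expose a local web app on a public URL
ngrok tcp 22       # expose a local TCP service, for example SSH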

2. frp (Fast Reverse Proxy)

frp github repo

frp (Fast Reverse Proxy) is a high-performance tool designed to expose local servers located behind NAT or firewalls to the internet.

Supporting protocols like TCP, UDP, HTTP, and HTTPS, frp enables seamless request forwarding to internal services via custom domain names.

It also includes a peer-to-peer (P2P) connection mode for direct communication, making it a versatile solution for developers and system administrators.

Key features:

  • Expose local servers securely, even behind NAT or firewalls, using TCP, UDP, HTTP, or HTTPS protocols.
  • Provide token and OIDC authentication for secure connections.
  • Support advanced configurations such as encryption, compression, and TLS for enhanced security.
  • Enable efficient traffic handling with features like TCP stream multiplexing, QUIC protocol support, and connection pooling.
  • Facilitate monitoring and management through a server dashboard, client admin UI, and Prometheus integration.
  • Offer flexible routing options, including URL routing, custom subdomains, and HTTP header rewriting.
  • Implement load balancing and service health checks for reliable performance.
  • Allow for port reuse, port range mapping, and bandwidth limits for granular control.
  • Simplify SSH tunneling with a built-in SSH Tunnel Gateway.
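As a rough illustration, a classic frp setup needs only two small config files, one per side. This is a minimal sketch using the older INI syntax; recent releases have moved to TOML, so check the docs for your version, and replace your-server-ip with your server's address.

# on the public server
cat > frps.ini <<'EOF'
[common]
bind_port = 7000
EOF
./frps -c frps.ini

# on the machine behind NAT: publish local SSH (port 22) as port 6000 on the server
cat > frpc.ini <<'EOF'
[common]
server_addr = your-server-ip
server_port = 7000

[ssh]
type = tcp
local_ip = 127.0.0.1
local_port = 22
remote_port = 6000
EOF
./frpc -c frpc.ini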

3. localtunnel

localtunnel website homepage

Localtunnel is an open-source, self-hosted tool that simplifies the process of exposing local web services to the internet.

By creating a secure tunnel, Localtunnel allows developers to share their local resources without needing to configure DNS or firewall settings.

It’s built on Node.js and can be easily installed using npm.

While Localtunnel is straightforward and effective, the project hasn't seen active maintenance since 2022, and the default Localtunnel.me server's long-term reliability is uncertain.

However, you can host your own Localtunnel server for better control and scalability.

Key features

  • Secure HTTPS for all tunnels, ensuring safe connections.
  • Share your local development environment with a unique, publicly accessible URL.
  • Test webhooks and external API callbacks with ease.
  • Integrate with cloud-based browser testing tools for UI testing.
  • Restart your local server seamlessly, as Localtunnel automatically reconnects.
  • Request a custom subdomain or proxy to a hostname other than localhost for added flexibility.
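In practice, getting a tunnel up is a two-liner (a sketch assuming Node.js and npm are installed; the subdomain is only granted if it is free):

npm install -g localtunnel
lt --port 8080                      # tunnel the local service running on port 8080
lt --port 8080 --subdomain mydemo   # optionally request a specific subdomain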

4. boringproxy

boringproxy is a reverse proxy and tunnel manager designed to simplify the process of securely exposing self-hosted web services to the internet.

Whether you're running a personal website, Nextcloud, Jellyfin, or other services behind a NAT or firewall, boringproxy handles all the complexities, including HTTPS certificate management and NAT traversal, without requiring port forwarding or extensive configuration.

It’s built with self-hosters in mind, offering a simple, fast, and secure solution for remote access.

Key features

  • 100% free and open source under the MIT license, ensuring transparency and flexibility.
  • No configuration files required—boringproxy works with sensible defaults and simple CLI parameters for easy adjustments.
  • No need for port forwarding, NAT traversal, or firewall rule configuration, as boringproxy handles it all.
  • End-to-end encryption with optional TLS termination at the server, client, or application, integrated seamlessly with Let's Encrypt.
  • Fast web GUI for managing tunnels, which works great on both desktop and mobile browsers.
  • Fully configurable through an HTTP API, allowing for automation and integration with other tools.
  • Cross-platform support on Linux, Windows, Mac, and ARM devices (e.g., Raspberry Pi and Android).
  • SSH support for those who prefer using a standard SSH client for tunnel management.

5. zrok

zrok logo

zrok is a next-generation, peer-to-peer sharing platform built on OpenZiti, a programmable zero-trust network overlay.

It enables users to share resources securely, both publicly and privately, without altering firewall or network configurations.

Designed for technical users, zrok provides frictionless sharing of HTTP, TCP, and UDP resources, along with files and custom content.

  • Share resources with non-zrok users over the public internet or directly with other zrok users in a peer-to-peer manner.
  • Works seamlessly on Windows, macOS, and Linux systems.
  • Start sharing within minutes using the zrok.io service. Download the binary, create an account, enable your environment, and share with a single command.
  • Easily expose local resources like localhost:8080 to public users without compromising security.
  • Share "network drives" publicly or privately and mount them on end-user systems for easy access.
  • Integrate zrok’s sharing capabilities into your applications with the Go SDK, which supports net.Conn and net.Listener for familiar development workflows.
  • Deploy zrok on a Raspberry Pi or scale it for large service instances. The single binary contains everything needed to operate your own zrok environment.
  • Leverages OpenZiti’s zero-trust principles for secure and programmable network overlays.
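A minimal sketch of that flow, assuming you have created a zrok account and received an enable token (the port and token below are placeholders):

zrok enable <your-account-token>     # register this environment with your account (run once)
zrok share public localhost:8080     # share a local web service with anyone via a public URL
zrok share private localhost:8080    # or share it only with other zrok users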

6. Pagekite

pagekite website homepage

PageKite is a veteran in the tunneling space, providing HTTP(S) and TCP tunnels for more than 14 years. It offers features like IP whitelisting, password authentication, and supports custom domains.

While the project is completely open-source and written in Python, the public service imposes limits, such as bandwidth caps, to prevent abuse.

Users can unlock additional features and higher bandwidth through affordable payment plans.

The free tier provides 2 GB of monthly transfer quota and supports custom domains, making it accessible for personal and small-scale use.

Key features

  • Enables any computer, such as a Raspberry Pi, laptop, or even old cell phones, to act as a server for hosting services like WordPress, Nextcloud, or Mastodon while keeping your home IP hidden.
  • Provides simplified SSH access to mobile or virtual machines and ensures privacy by keeping firewall ports closed.
  • Supports embedded developers with features like naming and accessing devices in the field, secure communications via TLS, and scaling solutions for both lightweight and large-scale deployments.
  • Offers web developers the ability to test and debug work remotely, interact with secure APIs, and run webhooks, API servers, or Git repositories directly from their systems.
  • Utilizes a global relay network to ensure low latency, high availability, and redundancy, with infrastructure managed since 2010.
  • Ensures privacy by routing all traffic through its relays, hiding your IP address, and supporting both end-to-end and wildcard TLS encryption.
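Usage is intentionally simple: with the pagekite.py client installed and a pagekite.me account, exposing a local web app is roughly a one-liner (a sketch; yourname stands for whatever kite name you registered):

pagekite.py 8080 yourname.pagekite.me   # expose the local service on port 8080 at your kite's URL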

7. Chisel

chisel github repo

Chisel is a fast and efficient TCP/UDP tunneling tool transported over HTTP and secured using SSH.

Written in Go (Golang), Chisel is designed to bypass firewalls and provide a secure endpoint into your network.

It is distributed as a single executable that functions as both client and server, making it easy to set up and use.

Key features

  • Offers a simple setup process with a single executable for both client and server functionality.
  • Secures connections using SSH encryption and supports authenticated client and server connections through user configuration files and fingerprint matching.
  • Automatically reconnects clients with exponential backoff, ensuring reliability in unstable networks.
  • Allows clients to create multiple tunnel endpoints over a single TCP connection, reducing overhead and complexity.
  • Supports reverse port forwarding, enabling connections to pass through the server and exit via the client.
  • Provides optional SOCKS5 support for both clients and servers, offering additional flexibility in routing traffic.
  • Enables tunneling through SOCKS or HTTP CONNECT proxies and supports SSH over HTTP using ssh -o ProxyCommand.
  • Performs efficiently, making it suitable for high-performance requirements.
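To illustrate the reverse-tunnel workflow, here is a minimal sketch, assuming the chisel binary is present on both machines and the server is reachable on port 8080 (host names and ports are placeholders):

# on the public server: accept reverse tunnels
chisel server --port 8080 --reverse

# on the machine behind the firewall: expose local SSH (22) as port 2222 on the server
chisel client http://your-server:8080 R:2222:localhost:22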

8. Telebit

telebit website homepage

Telebit has quickly become one of my favorite tunneling tools, and it’s easy to see why. It's still fairly new but does a great job of getting things done.

By installing Telebit Remote on any device, be it your laptop, Raspberry Pi, or another device, you can easily access it from anywhere.

The magic happens thanks to a relay system that allows multiplexed incoming connections on any external port, making remote access a breeze.

Not only that, but it also lets you share files and configure it like a VPN.

Key features

  • Share files securely between devices
  • Access your Raspberry Pi or other devices from behind a firewall
  • Use it like a VPN for additional privacy and control
  • SSH over HTTPS, even on networks with restricted ports
  • Simple setup with clear documentation and an installer script that handles everything

9. tunnel.pyjam.as

tunnel.pyjam.as website homepage

As a web developer, one of my favorite tools for quickly sharing projects with clients is tunnel.pyjam.as.

It allows you to set up SSL-terminated, ephemeral HTTP tunnels to your local machine without needing to install any custom software, thanks to Wireguard.

It’s perfect for when you want to quickly show someone a project you’re working on without the hassle of complex configurations.

Key features

  • No software installation required, thanks to Wireguard
  • Quickly set up a reverse proxy to share your local services
  • SSL-terminated tunnels for secure connections
  • Simple to use with just a curl command to start and stop tunnels
  • Ideal for quick demos or temporary access to local projects
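Here is roughly what that looks like in practice, a sketch based on the service's documented flow. It assumes wireguard-tools (wg-quick) is installed and that 8000 is the local port you want to share:

curl https://tunnel.pyjam.as/8000 > tunnel.conf   # request a tunnel config for local port 8000
wg-quick up ./tunnel.conf                         # bring the tunnel up; a public URL is printed
wg-quick down ./tunnel.conf                       # tear it down when you're finished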

Final thoughts

When it comes to tunneling tools, there's no shortage of options, and each of the projects we've discussed here offers something unique.

Personally, I’m too deeply invested in Cloudflare Tunnel to stop using it anytime soon. It’s become a key part of my workflow, and I rely on it for many of my use cases.

However, that doesn’t mean I won’t continue exploring these open-source alternatives. I’m always excited to see how they evolve.

For instance, with tunnel.pyjam.as, I find it incredibly time-saving to simply edit the tunnel.conf file and run its WireGuard instance to quickly share my projects with clients.

I’d love to hear what you think! Have you tried any of these open-source tunneling tools, or do you have your own favorites? Let me know in the comments.

by: Abhishek Prakash
Wed, 29 Jan 2025 20:04:25 +0530


What's in a name? Sometimes the name can be deceptive.

For example, in the Linux Tips and Tutorials section of this newsletter, I am sharing a few commands that have nothing to do with what their name indicates 😄

Here are the other highlights of this edition of LHB Linux Digest:

  • Nice and renice commands
  • ReplicaSet in Kubernetes
  • Self hosted code snippet manager
  • And more tools, tips and memes for you
  • This edition of LHB Linux Digest newsletter is supported by RELIANOID.

❇️Comprehensive Load Balancing Solutions For Modern Networks

RELIANOID’s load balancing solutions combine the power of SD-WAN, secure application delivery, and elastic load balancing to optimize traffic distribution and ensure unparalleled performance.

With features like a robust Web Application Firewall (WAF) and built-in DDoS protection, your applications remain secure and resilient against cyber threats. High availability ensures uninterrupted access, while open networking and user experience networking enhance flexibility and deliver a seamless experience across all environments, from on-premises to cloud.

Free Load Balancer Download | Community Edition by RELIANOID
Discover our Free Load Balancer | Community Edition | The best Open Source Load Balancing software for providing high availability and content switching services

📖 Linux Tips and Tutorials

Using nice and renice commands to change process priority.

Change Process Priority With nice and renice Commands
You can modify if a certain process should get priority in consuming CPU with nice and renice commands.
by: LHB Community
Wed, 29 Jan 2025 18:26:26 +0530


Kubernetes is a powerful container orchestration platform that enables developers to manage and deploy containerized applications with ease. One of its key components is the ReplicaSet, which plays a critical role in ensuring high availability and scalability of applications.

In this guide, we will explore the ReplicaSet, its purpose, and how to create and manage it effectively in your Kubernetes environment.

What is a ReplicaSet in Kubernetes?

A ReplicaSet in Kubernetes is a higher-level abstraction that ensures a specified number of pod replicas are running at all times. If a pod crashes or becomes unresponsive, the ReplicaSet automatically creates a new pod to maintain the desired state. This guarantees high availability and resilience for your applications.

The key purposes of a ReplicaSet include:

  • Scaling Pods: ReplicaSets manage the replication of pods, ensuring the desired number of replicas are always running.
  • High Availability: Ensures that your application remains available even if one or more pods fail.
  • Self-Healing: Automatically replaces failed pods to maintain the desired state.
  • Efficient Workload Management: Helps distribute workloads across nodes in the cluster.

How Does a ReplicaSet Work?

A ReplicaSet relies on selectors to match pods using labels. It uses these selectors to monitor the pods and ensure that the actual number of pods matches the specified replica count. If the number is less than the desired count, new pods are created; if it's greater, excess pods are terminated.

Creating a ReplicaSet

To create a ReplicaSet, you define its configuration in a YAML file. Here’s an example:

Example YAML Configuration

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx-replicaset
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.21
        ports:
        - containerPort: 80

In this YAML file:

  • replicas: Specifies the desired number of pod replicas.
  • selector: Matches pods with the label app=nginx.
  • template: Defines the pod’s specifications, including the container image and port.

Deploying a ReplicaSet

Once you have the YAML file ready, follow these steps to deploy it in your Kubernetes cluster.

Apply the YAML configuration to create the ReplicaSet:

kubectl apply -f nginx-replicaset.yaml

Verify that the ReplicaSet was created and the pods are running:

kubectl get replicaset

Output:

NAME                DESIRED   CURRENT   READY   AGE
nginx-replicaset    3         3         3       5s

View the pods to check the pods created by the ReplicaSet:

kubectl get pods

Output:

NAME                      READY   STATUS    RESTARTS   AGE
nginx-replicaset-xyz12    1/1     Running   0          10s
nginx-replicaset-abc34    1/1     Running   0          10s
nginx-replicaset-lmn56    1/1     Running   0          10s
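You can also see the self-healing behaviour first-hand: delete one of the pods and the ReplicaSet immediately spins up a replacement to restore the desired count (the pod name below comes from the example output above; yours will differ):

kubectl delete pod nginx-replicaset-xyz12
kubectl get pods   # a new pod appears almost instantly, keeping the replica count at 3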

Scaling a ReplicaSet

You can easily scale the number of replicas in a ReplicaSet. For example, to scale the above ReplicaSet to 5 replicas:

kubectl scale replicaset nginx-replicaset --replicas=5

Verify the updated state:

kubectl get replicaset

Output:

NAME                DESIRED   CURRENT   READY   AGE
nginx-replicaset    5         5         5       2m
Learn Kubernetes Operator
Learn to build, test and deploy Kubernetes Operators using Kubebuilder as well as Operator SDK in this course.

Conclusion

A ReplicaSet is an essential component of Kubernetes, ensuring the desired number of pod replicas are running at all times. By leveraging ReplicaSets, you can achieve high availability, scalability, and self-healing for your applications with ease.

Whether you’re managing a small application or a large-scale deployment, understanding ReplicaSets is crucial for effective workload management.

✍️
Author: Hitesh Jethwa has more than 15 years of experience with Linux system administration and DevOps. He likes to explain complicated topics in an easy-to-understand way.
by: Satoshi Nakamoto
Wed, 29 Jan 2025 16:53:22 +0530


A few years ago, we witnessed a shift to containers and in current times, I believe containers have become an integral part of the IT infrastructure for most companies.

Traditional monitoring tools often fall short in providing the visibility needed to ensure performance, security, and reliability.

According to my experience, monitoring resource allocation is the most important part of deploying containers and that is why I found the top container monitoring solutions offering real-time insights into your containerized environments.

Top Container Monitoring Solutions

Before I jump into details, here's a brief of all the tools which I'll be discussing in a moment:

Tool | Pricing & Plans | Free Tier? | Key Free Tier Features | Key Paid Plan Features
Middleware | Free up to 100GB; pay-as-you-go at $0.3/GB; custom enterprise plans | Yes | Up to 100GB data, 1k RUM sessions, 20k synthetic checks, 14-day retention | Unlimited data volume; data pipeline & ingestion control; single sign-on; dedicated support
Datadog | Free plan (limited hosts & 1-day metric retention); Pro starts at $15/host/month; Enterprise from $23 | Yes | Basic infrastructure monitoring for up to 5 hosts; limited metric retention | Extended retention, advanced anomaly detection, over 750 integrations, multi-cloud support
Prometheus & Grafana | Open-source; no licensing costs | Yes | Full-featured metrics collection (Prometheus), custom dashboards (Grafana) | Self-managed support only; optional managed services through third-party providers
Dynatrace | 15-day free trial; usage-based: $0.04/hour for infrastructure-only, $0.08/hour for full-stack | Trial Only | N/A (trial only) | AI-driven root cause analysis, automatic topology discovery, enterprise support, multi-cloud observability
Sematext | Free plan (Basic) with limited container monitoring; paid plans start at $0.007/container/hour | Yes | Live metrics for a small number of containers, 30-minute retention, limited alert rules | Increased container limits, extended retention, unlimited alert rules, full-stack monitoring
Sysdig | Free tier; Sysdig Monitor starts at $20/host/month; Sysdig Secure is $60/host/month | Yes | Basic container monitoring, limited metrics and retention | Advanced threat detection, vulnerability management, compliance checks, Prometheus support
SolarWinds | No permanent free plan; pricing varies by module (starts around $27.50/month or $2995 single license) | Trial Only | N/A (trial only) | Pre-built Docker templates, application-centric mapping, hardware health, synthetic monitoring
Splunk | Observability Cloud starts at $15/host/month (annual billing); free trial available | Trial Only | N/A (trial only) | Real-time log and metrics analysis, AI-based anomaly detection, multi-cloud integrations, advanced alerting
MetricFire | Paid plans start at $19/month; free trial offered | Trial Only | N/A (trial only) | Integration with Graphite and Prometheus, customizable dashboards, real-time alerts
SigNoz | Open-source (self-hosted) or custom paid support | Yes | Full observability stack (metrics, traces, logs) with no licensing costs | Commercial support, managed hosting services, extended retention options

Here, "N/A (trial only)" means that the tool does not offer a permanent free tier but provides a limited-time free trial for users to test its features. After the trial period ends, users must subscribe to a paid plan to continue using the tool. Essentially, there is no free version available for long-term use—only a temporary trial.

1. Middleware

Middleware

Middleware is an excellent choice for teams looking for a free or scalable container monitoring solution. It provides pre-configured dashboards for Kubernetes environments and real-time visibility into container health.

With a free tier supporting up to 100GB of data and a pay-as-you-go model at $0.3/GB thereafter, it’s ideal for startups or small teams.

Key features:

  • Pre-configured dashboards for Kubernetes
  • Real-time metrics tracking
  • Alerts for critical events
  • Correlation of metrics with logs and traces

Pros:

  • Free tier available
  • Easy setup with minimal configuration
  • Scalable pricing model

Cons:

  • Limited advanced features compared to premium tools

2. Datadog

datalog

Datadog is a premium solution offering observability across infrastructure, applications, and logs. Its auto-discovery feature makes it particularly suited for dynamic containerized environments.

The free plan supports up to five hosts with limited retention. Paid plans start at $15 per host per month.

Key features:

  • Real-time performance tracking
  • Anomaly detection using ML
  • Auto-discovery of new containers
  • Distributed tracing and APM

Pros:

  • Extensive integrations (750+)
  • User-friendly interface
  • Advanced visualization tools

Cons:

  • High cost for small teams
  • Pricing can vary based on usage spikes

3. Prometheus & Grafana

Prometheus & Grafana

This open-source duo provides powerful monitoring and visualization capabilities. Prometheus has an edge in metrics collection with its PromQL query language, while Grafana offers stunning visualizations.

This eventually makes it perfect for teams seeking customization without licensing costs.

Key features:

  • Time-series data collection
  • Flexible query language (PromQL)
  • Customizable dashboards
  • Integrated alerting system

Pros:

  • Free to use
  • Highly customizable
  • Strong community support

Cons:

  • Requires significant setup effort
  • Limited out-of-the-box functionality

4. Dynatrace

Dynatrace

Dynatrace is an AI-powered observability platform designed for large-scale hybrid environments. It automates topology discovery and offers you deep insights into containerized workloads. Pricing starts at $0.04/hour for infrastructure-only monitoring.

Key features:

  • AI-powered root cause analysis
  • Automatic topology mapping
  • Real-user monitoring
  • Cloud-native support (Kubernetes/OpenShift)

Pros:

  • Automated configuration
  • Scalability for large environments
  • End-to-end visibility

Cons:

  • Expensive for smaller teams
  • Proprietary platform limits flexibility

5. Sematext

Sematext

Sematext is a lightweight tool that allows users to monitor metrics and logs across Docker and Kubernetes platforms. Its free plan supports basic container monitoring with limited retention and alerting rules. Paid plans start at just $0.007/container/hour.

Key features:

  • Unified dashboard for logs and metrics
  • Real-time insights into containers and hosts
  • Auto-discovery of new containers
  • Anomaly detection and alerting

Pros:

  • Affordable pricing plans
  • Simple setup process
  • Full-stack observability features

Cons:

  • Limited advanced features compared to premium tools

7. SolarWinds

SolarWinds

SolarWinds offers an intuitive solution for SMBs needing straightforward container monitoring. While it doesn’t offer a permanent free plan, its pricing starts at around $27.50/month or $2995 as a one-time license fee.

Key features:

  • Pre-built Docker templates
  • Application-centric performance tracking
  • Hardware health monitoring
  • Dependency mapping

Pros:

  • Easy deployment and setup
  • Out-of-the-box templates
  • Suitable for smaller teams

Cons:

  • Limited flexibility compared to open-source tools

8. Splunk

Splunk

Splunk not only provides log analysis but also provides strong container monitoring capabilities through its Observability Cloud suite. Pricing starts at $15/host/month on annual billing.

Key features:

  • Real-time log and metrics analysis
  • AI-based anomaly detection
  • Customizable dashboards and alerts
  • Integration with OpenTelemetry standards

Pros:

  • Powerful search capabilities
  • Scalable architecture
  • Extensive integrations

Cons:

  • High licensing costs for large-scale deployments

9. MetricFire

MetricFire

It simplifies container monitoring by offering customizable dashboards and seamless integration with Kubernetes and Docker. MetricFire is ideal for teams looking for a reliable hosted solution without the hassle of managing infrastructure. Pricing starts at $19/month.

Key features:

  • Hosted Graphite and Grafana dashboards
  • Real-time performance metrics
  • Integration with Kubernetes and Docker
  • Customizable alerting systems

Pros:

  • Easy setup and configuration
  • Scales effortlessly as metrics grow
  • Transparent pricing model
  • Strong community support

Cons:

  • Limited advanced features compared to proprietary tools
  • Requires technical expertise for full customization

10. SigNoz

SigNoz

SigNoz is an open-source alternative to proprietary APM tools like Datadog and New Relic. It offers a platform for logs, metrics, and traces under one interface.

With native OpenTelemetry support and a focus on distributed tracing for microservices architectures, SigNoz is perfect for organizations seeking cost-effective yet powerful observability solutions.

Key features:

  • Distributed tracing for microservices
  • Real-time metrics collection
  • Centralized log management
  • Customizable dashboards
  • Native OpenTelemetry support

Pros:

  • Completely free if self-hosted
  • Active development community
  • Cost-effective managed cloud option
  • Comprehensive observability stack

Cons:

  • Requires infrastructure setup if self-hosted
  • Limited enterprise-level support compared to proprietary tools

Evaluate your infrastructure complexity and budget to select the best tool that aligns with your goals!

by: Abhishek Kumar
Thu, 23 Jan 2025 11:22:15 +0530


Imagine this: You’ve deployed a handful of Docker containers to power your favorite applications, maybe a self-hosted Nextcloud for your files, a Pi-hole for ad-blocking, or even a media server like Jellyfin.

Everything is running like a charm, but then you hit a common snag: keeping those containers updated.

When a new image is released, you’ll need to manually pull it, stop the running container, recreate it with the updated image, and hope everything works as expected.

Multiply that by the number of containers you’re running, and it’s clear how this quickly becomes a tedious and time-consuming chore.

But there’s more at stake than just convenience. Skipping updates or delaying them for too long can lead to outdated software running in your containers, which often means unpatched vulnerabilities.

These can become a serious security risk, especially if you’re hosting services exposed to the internet.

This is where Watchtower steps in, a tool designed to take the hassle out of container updates by automating the entire process.

Whether you’re running a homelab or managing a production environment, Watchtower ensures your containers are always up-to-date and secure, all with minimal effort on your part.

What is Watchtower?

Watchtower is an open-source tool that automatically monitors your Docker containers and updates them whenever a new version of their image is available.

It keeps your setup up-to-date, saving time and reducing the risk of running outdated containers.

But it’s not just a "set it and forget it" solution, it’s also highly customizable, allowing you to tailor its behavior to fit your workflow.

Whether you prefer full automation or staying in control of updates, Watchtower has you covered.

How does it work?

Watchtower works by periodically checking for updates to the images of your running containers. When it detects a newer version, it pulls the updated image, stops the current container, and starts a new one using the updated image.

The best part? It maintains your existing container configuration, including port bindings, volume mounts, and environment variables.

If your containers depend on each other, Watchtower handles the update process in the correct order to avoid downtime.

Deploying watchtower

Since you’re reading this article, I’ll assume you already have some sort of homelab or Docker setup where you want to automate container updates. That means I won’t be covering Docker installation here.

When it comes to deploying Watchtower, it can be done in two ways:

Docker run

If you’re just trying it out or want a straightforward deployment, you can run the following command:

docker run -d --name watchtower -v /var/run/docker.sock:/var/run/docker.sock containrrr/watchtower

This will spin up a Watchtower container that monitors your running containers and updates them automatically.

But here’s the thing, I’m not a fan of the docker run command.

It’s quick, sure, but I prefer a stack approach rather than cramming everything into a single command.

Docker compose

If you fancy using Docker Compose to run Watchtower, here’s a minimal configuration that replicates the docker run command above:

version: "3.8"

services:
  watchtower:
    image: containrrr/watchtower
    container_name: watchtower
    restart: always
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock

To start Watchtower using this configuration, save it as docker-compose.yml and run:

docker-compose up -d

This will give you the same functionality as the docker run command, but in a cleaner, more manageable format.

Customizing watchtower with environment variables

Running Watchtower plainly is all good, but we can make it even better with environment variables and command arguments.

Personally, I don’t like giving full autonomy to one service to automatically make changes on my behalf.

Since I have a pretty decent homelab running crucial containers, I prefer using Watchtower to notify me about updates rather than updating everything automatically.

This ensures that I remain in control, especially for containers that are finicky or require a perfect pairing with their databases.

Sneak peek into my homelab

Take a look at my homelab setup: it’s mostly CMS containers for myself and for clients, and some of them can behave unpredictably if not updated carefully.

So instead of letting Watchtower update everything, I configure it to provide insights and alerts, and then I manually decide which updates to apply.

To achieve this, we’ll add the following environment variables to our Docker Compose file:

Environment Variable Description
WATCHTOWER_CLEANUP Removes old images after updates, keeping your Docker host clean.
WATCHTOWER_POLL_INTERVAL Sets how often Watchtower checks for updates (in seconds). One hour (3600 seconds) is a good balance.
WATCHTOWER_LABEL_ENABLE Updates only containers with specific labels, giving you granular control.
WATCHTOWER_DEBUG Enables detailed logs, which can be invaluable for troubleshooting.
WATCHTOWER_NOTIFICATIONS Configures the notification method (e.g., email) to keep you informed about updates.
WATCHTOWER_NOTIFICATION_EMAIL_FROM The email address from which notifications will be sent.
WATCHTOWER_NOTIFICATION_EMAIL_TO The recipient email address for update notifications.
WATCHTOWER_NOTIFICATION_EMAIL_SERVER SMTP server address for sending notifications.
WATCHTOWER_NOTIFICATION_EMAIL_SERVER_PORT Port used by the SMTP server (commonly 587 for TLS).
WATCHTOWER_NOTIFICATION_EMAIL_USERNAME SMTP server username for authentication.
WATCHTOWER_NOTIFICATION_EMAIL_PASSWORD SMTP server password for authentication.

Here’s how the updated docker-compose.yml file would look:

version: "3.8"

services:
  watchtower:
    image: containrrr/watchtower
    container_name: watchtower
    restart: always
    environment:
      WATCHTOWER_CLEANUP: "true"
      WATCHTOWER_POLL_INTERVAL: "3600"
      WATCHTOWER_LABEL_ENABLE: "true"
      WATCHTOWER_DEBUG: "true"
      WATCHTOWER_NOTIFICATIONS: "email"
      WATCHTOWER_NOTIFICATION_EMAIL_FROM: "admin@example.com"
      WATCHTOWER_NOTIFICATION_EMAIL_TO: "notify@example.com"
      WATCHTOWER_NOTIFICATION_EMAIL_SERVER: "smtp.example.com"
      WATCHTOWER_NOTIFICATION_EMAIL_SERVER_PORT: "587"
      WATCHTOWER_NOTIFICATION_EMAIL_USERNAME: "your_email_username"
      WATCHTOWER_NOTIFICATION_EMAIL_PASSWORD: "your_email_password"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
I like to put my credentials in a separate environment file.
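Since WATCHTOWER_LABEL_ENABLE is set to true above, Watchtower only touches containers that explicitly opt in. A quick sketch of what opting a container in looks like (the label name is Watchtower's documented opt-in label; the nextcloud container is just an example):

docker run -d --name nextcloud \
  --label com.centurylinklabs.watchtower.enable=true \
  nextcloud:latest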

Once you run the Watchtower container for the first time, you'll receive an initial email confirming that the service is up and running.

Here's an example of what that email might look like:

After some time, as Watchtower analyzes your setup and scans the running containers, it will notify you if it detects any updates available for your containers.

These notifications are sent in real-time and look something like this:

This feature ensures you're always in the loop about potential updates without having to check manually.

Final thoughts

I’m really impressed by Watchtower and have been using it for a month now.

I recommend playing around with it in an isolated environment first, if possible; that’s what I did before deploying it in my homelab.

The email notification feature is great, but my inbox is now flooded with Watchtower emails, so I might create a rule to manage them better. Overall, no complaints so far! I find it better than the manual Docker Compose update method we discussed earlier.

Updating Docker Containers With Zero Downtime
A step by step methodology that can be very helpful in your day to day DevOps activities without sacrificing invaluable uptime.

What about you? What do you use to update your containers?

If you’ve tried Watchtower, share your experience, anything I should be mindful of?

Let us know in the comments!

Blogger

pwd command in Linux

by: Satoshi Nakamoto
Sat, 18 Jan 2025 10:27:48 +0530


The pwd command in Linux, short for Print Working Directory, displays the absolute path of the current directory, helping users navigate the file system efficiently.

It is one of the first commands you use when you start learning Linux. And if you are absolutely new, take advantage of this free course:

Learn the Basic Linux Commands in an Hour [With Videos]
Learn the basics of Linux commands in this crash course.

pwd command syntax

Like other Linux commands, pwd also follows this syntax.

pwd [OPTIONS]

Here, you have [OPTIONS], which are used to modify the default behavior of the pwd command. If you don't use any options with the pwd command, it will show the physical path of the current working directory by default.

Unlike many other Linux commands, pwd does not come with many options. Apart from the usual --help and --version, it has only two important flags:

Option Description
-L Displays the logical current working directory, including symbolic links.
-P Displays the physical current working directory, resolving symbolic links.
--help Displays help information about the pwd command.
--version Outputs version information of the pwd command.

Now, let's take a look at the practical examples of the pwd command.

1. Display the current location

This is what the pwd command is famous for, giving you the name of the directory where you are located or from where you are running the command.

pwd
Display the current working directory

2. Display the logical path including symbolic links

If you want to display logical paths and symbolic links, all you have to do is execute the pwd command with the -L flag as shown here:

pwd -L

To showcase its usage, I will need to go through multiple steps so stay with me. First, go to the tmp directory using the cd command as shown here:

cd /tmp

Now, let's create a symbolic link which is pointing to the /var/log directory:

ln -s /var/log log_link

Finally, change your directory to log_link and use the pwd command with the -L flag:

pwd -L
Display the logical path including symbolic links

In the above steps, I went to the /tmp directory and then created a symbolic link which points to a specific location (/var/log) and then I used the pwd command and it successfully showed me the symbolic link.

3. Display the physical path by resolving symbolic links

The pwd command can also resolve symbolic links, meaning you'll see the destination directory that a soft link points to.

Use the -P flag for this:

pwd -P

I am going to use the symbolic link which I had created in the 2nd example. Here's what I did:

  • Navigate to /tmp.
  • Create a symbolic link (log_link) pointing to /var/log.
  • Change into the symbolic link (cd log_link)

Once you perform all the steps, you can check the real path of the symbolic link:

pwd -P
Follow symbolic link using the pwd command

4. Use pwd command in shell scripts

To get the current location in a bash shell script, you can store the value of the pwd command in a variable and later on print it as shown here:

current_dir=$(pwd)
echo "You are in $current_dir"

Now, if you execute this shell script in your home directory like I did, you will get similar output to mine:

Use the pwd command in the shell script

Bonus: Know the previous working directory

This is not exactly the use of the pwd command but it is somewhat related and interesting. There is an environment variable in Linux called OLDPWD which stores the previous working directory path.

This means you can get the previous working directory by printing the value of this environment variable:

echo "$OLDPWD"
know the previous working directory
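The same variable also powers a handy shortcut for jumping back to wherever you were before:

cd "$OLDPWD"   # return to the previous directory; the shortcut cd - does the same (and prints it)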

Conclusion

This was a quick tutorial on how you can use the pwd command in Linux where I went through syntax, options, and some practical examples of it.

I hope you will find them helpful. If you have any queries or suggestions, leave us a comment.

by: Abhishek Prakash
Wed, 15 Jan 2025 18:28:50 +0530


This is the first newsletter of the year 2025. I hope expanding your Linux knowledge is one of your New Year's resolution, too. I am looking to learn and use Ansible in homelab setup. What's yours?

The focus of Linux Handbook in 2025 will be on self-hosting. You'll see more tutorials and articles on open source software you can self host on your cloud server or your home lab.

Of course, we'll continue to create new content on Kubernetes, Terraform, Ansible and other DevOps tools.

Here are the other highlights of this edition of LHB Linux Digest:

  • Extraterm terminal
  • File descriptors
  • Self hosting mailing list manager
  • And more tools, tips and memes for you
  • This edition of LHB Linux Digest newsletter is supported by PikaPods.

❇️Self-hosting without hassle

PikaPods allows you to quickly deploy your favorite open source software. All future updates are handled automatically by PikaPods while you enjoy using the software. I use it to self host Umami analytics.

Oh! You get $5 free credit, so try it out and see if you could rely on PikaPods.

PikaPods - Instant Open Source App Hosting
Run the finest Open Source web apps from $1/month, fully managed, no tracking, no ads, full privacy. Self-hosting was never this convenient.

As a tech-enthusiast content creator, I'm always on the lookout for innovative ways to connect with my audience and share my passion for technology and self-sufficiency.

But as my newsletter grew in popularity, I found myself struggling with the financial burden of relying on external services like Mailgun - a problem many creators face when trying to scale their outreach efforts without sacrificing quality.

That's when I discovered Listmonk, a free and open-source mailing list manager that not only promises high performance but also gives me complete control over my data.

In this article, I'll walk you through how I successfully installed and deployed Listmonk locally using Docker, sharing my experiences and lessons learned along the way.

I used Linode's cloud server to test the scenario. You may try either Linode or DigitalOcean, or your own servers.

Customer Referral Landing Page - $100
Cut Your Cloud Bills in Half Deploy more with Linux virtual machines, global infrastructure, and simple pricing. No surprise bills, no lock-in, and the

Get started on Linode with a $100, 60-day credit for new users.

DigitalOcean – The developer cloud
Helping millions of developers easily build, test, manage, and scale applications of any size – faster than ever before.

Get started on DigitalOcean with a $100, 60-day credit for new users.

Prerequisites

Before diving into the setup process, make sure you have the following:

  • Docker and Docker Compose installed on your server.
  • A custom domain that you want to use for Listmonk.
  • Basic knowledge of shell commands and editing configuration files.

If you are absolutely new to Docker, we have a course just for you:

Learn Docker: Complete Beginner’s Course
Learn Docker, an important skill to have for any DevOps and modern sysadmin. Learn all the essentials of Docker in this series.

Step 1: Set up the project directory

The first thing you need to do is create the directory where you'll store all the necessary files for Listmonk. I like an organized setup; it helps in troubleshooting.

In your terminal, run:

mkdir listmonk
cd listmonk
creating listmonk directory

This will set up a dedicated directory for Listmonk’s files.

Step 2: Create the Docker compose file

Listmonk has made it incredibly easy to get started with Docker. Their official documentation provides a detailed guide and even a sample docker-compose.yml file to help you get up and running quickly.

Download the sample file to the current directory:

curl -LO https://github.com/knadh/listmonk/raw/master/docker-compose.yml
downloading sample docker-compose.yml file from listmonk

Here is the sample docker-compose.yml file; I tweaked some of the default environment variables:

💡
It's crucial to keep your credentials safe! Store them in a separate .env file, not hardcoded in your docker-compose.yml. I know, I know, I did it for this tutorial... but you're smarter than that, right? 😉
editing the environment variables in sample docker-compose.yml
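A minimal sketch of that approach: keep the secrets in a .env file next to docker-compose.yml and reference them from the compose file with ${...} placeholders (the variable names below are illustrative, not Listmonk's exact ones):

# docker compose automatically picks up .env from the project directory for ${VAR} substitution
cat > .env <<'EOF'
POSTGRES_PASSWORD=change-me
ADMIN_PASSWORD=change-me-too
EOF
chmod 600 .env   # keep it readable by you only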

For most users, this setup should be sufficient but you can always tweak settings to your own needs.

Then run the containers in the background:

docker compose up -d
running listmonk containers

Once you've run these commands, you can access Listmonk by navigating to http://localhost:9000 in your browser.

Setting up SSL

By default, Listmonk runs over HTTP and doesn’t include built-in SSL support. It is kinda important if you are running any service these days. So the next thing we need to do is to set up SSL support.

While I personally prefer using Cloudflare Tunnels for SSL and remote access, this tutorial will focus on Caddy for its straightforward integration with Docker.

Start by creating a folder named caddy in the same directory as your docker-compose.yml file:

mkdir caddy

Inside the caddy folder, create a file named Caddyfile with the following content:

listmonk.example.com {
    reverse_proxy app:9000
}

Replace listmonk.example.com with your actual domain name. This tells Caddy to proxy requests from your domain to the Listmonk service running on port 9000.

creating caddyfile

Ensure your domain is correctly configured in DNS. Add an A record pointing to your server's IP address (in my case, the Linode server's IP).

If you’re using Cloudflare, set the proxy status to DNS only during the initial setup to let Caddy handle SSL certificates.

creating a dns record for listmonk

Next, add the Caddy service to your docker-compose.yml file. Here’s the configuration to include:

  caddy:
    image: caddy:latest
    restart: unless-stopped
    container_name: caddy
    ports:
      - 80:80
      - 443:443
    volumes:
      - ./caddy/Caddyfile:/etc/caddy/Caddyfile
      - ./caddy/caddy_data:/data
      - ./caddy/caddy_config:/config
    networks:
      - listmonk
adding caddy service in docker-compose file

This configuration sets up Caddy to handle HTTP (port 80) and HTTPS (port 443) traffic, automatically obtain SSL certificates, and reverse proxy requests to the Listmonk container.

Finally, bring the stack up again so the new Caddy service is created alongside Listmonk:

docker compose up -d

Once the containers are up and running, navigate to your domain (e.g., https://listmonk.example.com) in a browser.

Caddy will handle the SSL certificate issuance and proxy the traffic to Listmonk seamlessly.

Step 3: Accessing the Listmonk web UI

Once Listmonk is up and running, it’s time to access the web interface and complete the initial setup.

Open your browser and navigate to your domain or IP address where Listmonk is hosted. If you’ve configured HTTPS, the URL should look something like this:

https://listmonk.yourdomain.com

and you’ll be greeted with the login page. Click Login to proceed.

Creating the admin user

On the login screen, you’ll be prompted to create an administrator account. Enter your email address, a username, and a secure password, then click Continue.

creating admin account for listmonk

This account will serve as the primary admin for managing Listmonk.

Configure general settings

Once logged in, navigate to Settings > Settings in the left sidebar. Under the General tab, customize the following:

  • Site Name: Enter a name for your Listmonk instance.
  • Root URL: Replace the default http://localhost:9000 with your domain (e.g., https://listmonk.yourdomain.com).
  • Admin Email: Add an email address for administrative notifications.

Click Save to apply these changes.

editing general settings

Configure SMTP settings

To send emails, you’ll need to configure SMTP settings:

  1. Click on the SMTP tab in the settings.
  2. Fill in the details:
    • Host: smtp.emailhost.com
    • Port: 465
    • Auth Protocol: Login
    • Username: Your email address
    • Password: Your email password (or Gmail App password, generated via Google’s security settings)
    • TLS: SSL/TLS
  3. Click Save to confirm the settings.
adding smtp settings to send emails

Create a new campaign list

Now, let’s create a list to manage your subscribers:

  1. Go to All Lists in the left sidebar and click + New.
  2. Give your list a name, set it to Public, and choose between Single Opt-In or Double Opt-In.
  3. Add a description, then click Save.
creating a test newsletter

Your newsletter subscription form will now be available at:

https://listmonk.yourdomain.com/subscription/form

newsletter subscribe page

With everything set up and running smoothly, it’s time to put Listmonk to work.

You can easily import your existing subscribers, customize the look and feel of your emails, and even change the logo to match your brand.

Final thoughts

And that’s it! You’ve successfully set up Listmonk, configured SMTP, and created your first campaign list. From here, you can start sending newsletters and growing your audience.

I’m currently testing Listmonk as the newsletter solution for my own website, and while it looks robust, I’m curious to see how it performs in a production environment.

That said, I’m genuinely impressed by the thought and effort that Kailash Nadh and the contributors have put into this software; it’s a remarkable achievement.

For any questions or challenges you encounter, the Listmonk GitHub page is an excellent resource and the developers are highly responsive.

Finally, I’d love to hear your thoughts! Share your feedback, comments, or suggestions below, and let me know about your experience with Listmonk and how you’re using it for your projects.

Happy emailing! 📨

https://linuxhandbook.com/content/images/2025/01/listmon-self-hosting.png

File descriptors are a core concept in Linux and other Unix-like operating systems. They provide a way for programs to interact with files, devices, and other input/output (I/O) resources.

Simply put, a file descriptor is like a "ticket" or "handle" that a program uses to access these resources. Every time a program opens a file or creates an I/O resource (like a socket or pipe), the operating system assigns it a unique number called a file descriptor.

This number allows the program to read, write, or perform other operations on the resource.

And as we all know, in Linux, almost everything is treated as a file—whether it's a text file, a keyboard input, or even network communication. File descriptors make it possible to handle all these resources in a consistent and efficient way.

What Are File Descriptors?

A file descriptor is a non-negative integer assigned by your operating system whenever a program opens a file or another I/O resource. It acts as an identifier that the program uses to interact with the resource.

For example:

  • When you open a text file, the operating system assigns it a file descriptor (e.g., 3).
  • If you open another file, it gets the next available file descriptor (e.g., 4).

These numbers are used internally by the program to perform operations like reading from or writing to the resource.

This simple mechanism allows programs to interact with different resources without needing to worry about how these resources are implemented underneath.

For instance, whether you're reading from a keyboard or writing to a network socket, you use file descriptors in the same way!

The three standard file descriptors

Every process in Linux starts with three predefined file descriptors: Standard Input (stdin), Standard Output (stdout), and Standard Error (stderr).

Here's a brief summary of their use:

Descriptor Integer Value Symbolic Constant Purpose
stdin 0 STDIN_FILENO Standard input (keyboard input by default)
stdout 1 STDOUT_FILENO Standard output (screen output by default)
stderr 2 STDERR_FILENO Standard error (error messages by default)

Now, let's address each file descriptor with details.

1. Standard Input (stdin)- Descriptor: 0

The purpose of the standard input stream is to receive input data. By default, it reads input from the keyboard unless redirected to another source like a file or pipe. Programs use stdin to accept user input interactively or process data from external sources.

When you type something into the terminal and press Enter, the data is sent to the program's stdin. This stream can also be redirected to read from files or other programs using shell redirection operators (<).

One simple example of stdin would be a script that takes input from the user and prints it:

#!/bin/bash

# Prompt the user to enter their name
echo -n "Enter your name: "

# Read the input from the user
read name

# Print a greeting message
echo "Hello, $name!"

Here's what the output looks like:

But there is another way of using the input stream–redirecting the input itself. You can create a text file and redirect the input stream.

For example, here I have created a sample text file named input.txt which contains my name Satoshi. Later I redirected the input stream using <:
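Assuming the greeting script above is saved as stdin.sh (the filename is just an example), the redirection looks like this:

./stdin.sh < input.txt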

As you can see, rather than waiting for my input, it took the data from the text file, so the input is now somewhat automated.

2. Standard Output (stdout)- Descriptor: 1

The standard output stream is used for displaying normal output generated by programs. By default, it writes output to the terminal screen unless redirected elsewhere.

In simple terms, programs use stdout to print results or messages. This stream can be redirected to write output to files or other programs using shell operators (> or |).

Let's take a simple script that prints a greeting message:

#!/bin/bash

# Print a message to standard output
echo "This is standard output."

Here's the simple output (nothing crazy but a decent example):

stdout sample script

Now, if I want to redirect the output to a file, rather than showing it on the terminal screen, then I can use > as shown here:

./stdout.sh > output.txt
change output datastream

Another good example can be the redirecting output of a command to a text file:

ls > output.txt
Redirect output of command to text file

3. Standard Error (stderr)- Descriptor: 2

The standard error stream is used for displaying error messages and diagnostics. It is separate from stdout so that errors can be handled independently of normal program output.

For a better understanding, I wrote a script that writes a message to standard error; it also uses exit 1 to mimic a faulty execution:

#!/bin/bash

# Print a message to standard output
echo "This is standard output."

# Print an error message to standard error
echo "This is an error message." >&2

# Exit with a non-zero status to indicate an error
exit 1

If you simply execute this script, both messages show up in the terminal, because stdout and stderr both point to the screen by default. To see the difference, you can redirect the output and the error to different files.

For example, here, I have redirected the error message to stderr.log and the normal output will go into stdout.log:

./stderr.sh > stdout.log 2> stderr.log
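If you instead want both streams in the same file, you can redirect stderr into stdout with 2>&1:

./stderr.sh > all.log 2>&1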

Bonus: Types of limits on file descriptors

The Linux kernel puts a limit on the number of file descriptors a process can use. These limits help manage system resources and prevent any single process from using too many. There are different types of limits, each serving a specific purpose (a quick way to inspect them follows the list).

  • Soft Limits: The default maximum number of file descriptors a process can open. Users can temporarily increase this limit up to the hard limit for their session.
  • Hard Limits: The absolute maximum number of file descriptors a process can open. Only the system admin can increase this limit to ensure system stability.
  • Process-Level Limits: Each process has its own set of file descriptor limits, inherited from its parent process, to prevent any single process from overusing resources.
  • System-Level Limits: The total number of file descriptors available across all processes on the system. This ensures fairness and prevents global resource exhaustion.
  • User-Level Limits: Custom limits set for specific users or groups to allocate resources differently based on their needs.
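As hinted above, you can inspect (and, within the hard limit, raise) these limits for your current shell with the ulimit built-in:

ulimit -n        # current soft limit on open file descriptors
ulimit -Hn       # hard limit
ulimit -n 4096   # raise the soft limit for this shell session (must not exceed the hard limit)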

Wrapping Up...

In this explainer, I went through what file descriptors are in Linux and shared some practical examples to explain their function. I tried to cover the types of limits in detail but then I had to drop the "detail" to stick to the main idea of this article.

But if you want, I can surely write a detailed article on the types of limits on file descriptors. Also, if you have any questions or suggestions, leave us a comment.

https://linuxhandbook.com/content/images/2025/01/file-descriptor-in-linux.png
I don’t like my prompt and I want to change it. It shows my username and host, but the formatting is not what I want. This post will get you started quickly on doing exactly that.

This is my current prompt below:

To change the prompt you will update .bashrc and set the PS1 environment variable to a new value.

Here is a cheatsheet of the prompt options:

You can use these placeholders for customization:

\u – Username
\h – Hostname
\w – Current working directory
\W – Basename of the current working directory
\$ – Shows $ for a normal user and # for the root user
\t – Current time (HH:MM:SS)
\d – Date (e.g., "Mon Jan 05")
\! – History number of the command
\# – Command number

Here is the new prompt I am going to use:

export PS1="linuxhint@mybox \w: "

Can you guess what that does? For my article writing, this is exactly what I want. Here is the screenshot:

A lot of people will want the username and hostname; for my example I don’t! But you can use \u and \h for that. I used \w to show what directory I am in. You can also show the date and time, etc.
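Keep in mind that an export PS1=... typed in a terminal only affects that session. To make the change permanent, append it to your ~/.bashrc and reload the file:

echo 'export PS1="linuxhint@mybox \w: "' >> ~/.bashrc
source ~/.bashrc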

You can also play with setting colors in the prompt with these variables:

Foreground Colors:
\e[30m – Black
\e[31m – Red
\e[32m – Green
\e[33m – Yellow
\e[34m – Blue
\e[35m – Magenta
\e[36m – Cyan
\e[37m – White

Background Colors:
\e[40m – Black
\e[41m – Red
\e[42m – Green
\e[43m – Yellow
\e[44m – Blue
\e[45m – Magenta
\e[46m – Cyan
\e[47m – White
Reset Color:
\e[0m – Reset to default

Here is my colorful version. The \[ and \] around each color code tell bash that the enclosed characters are non-printing, so the prompt length is calculated correctly and line wrapping doesn’t break.

export PS1="\[\e[35m\]linuxhint\[\e[0m\]@\[\e[34m\]mybox\[\e[0m\] \[\e[31m\]\w\[\e[0m\]: "


This uses Magenta, Blue and Red coloring for different parts of the prompt.

Conclusion

You can see how to customize your bash prompt with the PS1 environment variable in Ubuntu. Hopefully this helps you be happier with your environment in Linux.


In Bash version 4, associative arrays were introduced, and from that point, they solved my biggest problem with arrays in Bash—indexing. Associative arrays allow you to create key-value pairs, offering a more flexible way to handle data compared to indexed arrays.

In simple terms, you can store and retrieve data using string keys, rather than numeric indices as in traditional indexed arrays.

But before we begin, make sure you are running the bash version 4 or above by checking the bash version:

echo $BASH_VERSION
check the bash version

If you are running bash version 4 or above, you can access the associative array feature.

Using Associative arrays in bash

Before I walk you through the examples of using associative arrays, I would like to mention the key differences between Associative and indexed arrays:

Feature Indexed Arrays Associative Arrays
Index Type Numeric (e.g., 0, 1, 2) String (e.g., "name", "email")
Declaration Syntax declare -a array_name declare -A array_name
Access Syntax ${array_name[index]} ${array_name["key"]}
Use Case Sequential or numeric data Key-value pair data

Now, let's take a look at what you are going to learn in this tutorial on using Associative arrays:

  • Declaring an Associative array
  • Assigning values to an array
  • Accessing values of an array
  • Iterating over an array's elements

1. How to declare an Associative array in bash

To declare an associative array in bash, all you have to do is use the declare command with the -A flag along with the name of the array as shown here:

declare -A Array_name

For example, if I want to declare an associative array named LHB, then I would use the following command:

declare -A LHB
declare associative array in bash

2. How to add elements to an Associative array

There are two ways you can add elements to an Associative array: You can either add elements after declaring an array or you can add elements while declaring an array. I will show you both.

Adding elements after declaring an array

This is quite easy and recommended if you are getting started with bash scripting. In this method, you add elements to the already declared array one by one.

To do so, you have to use the following syntax:

my_array[key1]="value1"

In my case, I have assigned two values to the LHB array using two keys:

LHB[name]="Satoshi"
LHB[age]="25"
Assign values to the associative array

Adding elements while declaring an array

If you want to add elements while declaring the associative array itself, you can follow the given command syntax:

declare -A my_array=(
    [key1]="value1"
    [key2]="value2"
    [key3]="value3"
)

For example, here I created a new associative array and added three elements:

declare -A myarray=(
    [Name]="Satoshi"
    [Age]="25"
    [email]="satoshi@xyz.com"
)
Assign values to the associative array while creating array

3. Create a read-only Associative array

If you want to create a read-only array (for some reason), you'd have to use the -r flag while creating an array:

declare -rA my_array=(
    [key1]="value1"
    [key2]="value2"
    [key3]="value3"
)

Here, I created a read-only Associative array named MYarray:

declare -rA MYarray=(
    [City]="Tokyo"
    [System]="Ubuntu"
    [email]="satoshi@xyz.com"
)

Now, if I try to add a new element to this array, it will throw an error saying "MYarray: read-only variable":

Can not add additional elements to read-only associative array

4. Print keys and values of an Associative array

If you want to print the value of a specific key (similar to printing the value of a specific indexed element), you can simply use the following syntax for that purpose:

echo ${my_array[key1]}

For example, if I want to print the value of email key from the myarray array, I would use the following:

echo ${myarray[email]}
Print value of a key in associative array

The method of printing all the keys and elements of an Associative array is mostly the same. To print all keys at once, use ${!my_array[@]} which will retrieve all the keys in the associative array:

echo "Keys: ${!my_array[@]}"

If I want to print all the keys of myarray, then I would use the following:

echo "Keys: ${!myarray[@]}"
Print keys at once

On the other hand, if you want to print all the values of an Associative array, use ${my_array[@]} as shown here:

echo "Values: ${my_array[@]}"

To print values of the myarray, I used the below command:

echo "Values: ${myarray[@]}"
Print values of associate array at once

5. Find the Length of the Associative Array

The method for finding the length of the associative array is exactly the same as you do with the indexed arrays. You can use the ${#array_name[@]} syntax to find this count as shown here:

echo "Length: ${#my_array[@]}"

If I want to find a length of myarray array, then I would use the following:

echo "Length: ${#myarray[@]}"
Find length of associative array

6. Iterate over an Associative array

Iterating over an associative array allows you to process each key-value pair. In Bash, you can loop through:

  • The keys using ${!array_name[@]}.
  • The corresponding values using ${array_name[$key]}.

This is useful for tasks like displaying data, modifying values, or performing computations. For example, here I wrote a simple for loop to print the keys and elements accordingly:

for key in "${!myarray[@]}"; do
    echo "Key: $key, Value: ${myarray[$key]}"
done
Iterate over associative array

7. Check if a key exists in the Associative array

Sometimes, you need to verify whether a specific key exists in an associative array. Bash provides the -v operator for this purpose.

Here, I wrote a simple if-else snippet that uses the -v test to check whether a key exists in the myarray array:

if [[ -v myarray["username"] ]]; then
    echo "Key 'username' exists"
else
    echo "Key 'username' does not exist"
fi
check if a key pair exist in associative array

8. Clear Associative array

If you want to remove specific keys from the associative array, then you can use the unset command along with a key you want to remove:

unset my_array["key1"]

For example, if I want to remove the email key from the myarray array, then I will use the following:

unset myarray["email"]
Remove key pairs from associative array

9. Delete the Associative array

If you want to delete the associative array, all you have to do is use the unset command along with the array name as shown here:

unset my_array

For example, if I want to delete the myarray array, then I would use the following:

unset myarray
delete associative array

Wrapping Up...

In this tutorial, I went through the basics of the associative array with multiple examples. I hope you will find this guide helpful.

If you have any questions or suggestions, leave us a comment.

https://linuxhandbook.com/content/images/2024/12/associative-array-bash.png
In this post I will show you how to install the ZSH shell on Rocky Linux. ZSH is an alternate shell that some people prefer over the BASH shell, citing better auto-completion, theme support, and a plugin system. If you want to give ZSH a try, it's quite easy to install. This post is focused on Rocky Linux users and covers how to install ZSH and get started with its usage.

Before installing anything new, it’s good practice to update your system packages:

sudo dnf update

It might be easier than you think to install and use a new shell. First install the package like this:

sudo dnf install zsh

Now you can enter a zsh session by invoking the shell’s name ‘zsh’.

zsh

You might not be sure if it succeeded, so how can you verify which shell you are using now?

echo $0

You should see some output like the following:

[root@mypc]~# echo $0
zsh
[root@mypc]~#

OK, good. If it says bash or something other than zsh, you have a problem with your setup. Now let's run a couple of basic commands.

Example 1: Print all numbers from 1 to 10. In Zsh, you can use a for loop to do this:

for i in {1..10}; do echo $i; done

Example 2: Create a variable to store your username and then print it. You can use the $USER environment variable which automatically contains your username:

my_username=$USER
echo $my_username

Example 3: Echo a string that says “I love $0”. The $0 variable in a shell script or interactive shell session refers to the name of the script or shell being run. Here’s how to use it:

echo "I love $0"

When run in an interactive Zsh session, this will output something like “I love -zsh” if you’re in a login shell, or “I love zsh” if not.
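If you decide you like zsh, you can also make it your default login shell with chsh (on Rocky Linux the chsh command is provided by the util-linux-user package if it isn't already installed):

chsh -s $(which zsh)

Log out and back in for the change to take effect.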

Conclusion

Switching shells on a Linux system is easy thanks to its modularity. Now that you have seen how to install ZSH, you may like it and decide to use it as your preferred shell.

Even on Linux, you can enjoy gaming and interact with fellow gamers via Steam. For a Linux gamer, Steam is a handy game distribution platform that allows you to install different games, including purchased ones. Moreover, with Steam, you can connect with other gamers and play multiplayer titles. Steam is cross-platform and lets you purchase and install games on any device through a Steam account. This post gives different options for installing Steam on Ubuntu 24.04.

Different Methods of Installing Steam on Ubuntu 24.04

No matter the Ubuntu version that you use, there are three easy ways of installing Steam. For our guide, we are working on Ubuntu 24.04, and we’ve detailed the steps to follow for each method. Take a look!

Method 1: Install Steam via Ubuntu Repository

On your Ubuntu, Steam can be installed through the multiverse repository by following the steps below.
Step 1: Add the Multiverse Repository
The multiverse repository isn’t enabled on Ubuntu by default, but executing the following command will add it.

$ sudo add-apt-repository multiverse

steam-1.png

Step 2: Refresh the Package Index
After adding the new repository, we must refresh the package index before we can install Steam.

$ sudo apt update

steam-2.png

Step 3: Install Steam
Lastly, install Steam from the repository by running the APT command below.

$ sudo apt install steam

steam-3.png

Method 2: Install Steam as a Snap

Steam is available as a snap package and you can install it by accessing the Ubuntu 24.04 App Center or by installing via command-line.
To install it via GUI, use the below steps.

Step 1: Search for Steam on App Center

On your Ubuntu, open the App Center and search for “Steam” in the search box. Different results will open and the first one is what we want to install.

steam-5.png

Step 2: Install Steam

On the search results page, click on Steam to open a window showing a summary of its information. Locate the green Install button and click on it.

steam-6.png

You will get prompted to enter your password before the installation can begin.

steam-7.png

Once you do so, a window showing the progress bar of the installation process will appear. Once the process completes, you will have Steam installed and ready for use on your Ubuntu 24.04.

Alternatively, if you prefer using the command-line option to install Steam from App Center, you can do so using the snap command. Specify the package when running your command as shown below.

$ sudo snap install steam

steam-8.png

On the output, the download and installation progress will be shown and once it completes, Steam will be available from your applications. You can open it and set it up for your gaming.

Method 3: Download and Install the Steam Package

Steam releases a .deb package for Linux and by downloading it, you can use it to install Steam. Unlike the previous methods, this method requires downloading the Steam package from its website using command line utilities such as wget or curl.

Step 1: Install wget

To download the Steam .deb package, we will use wget. You can skip this step if you already have it installed. Otherwise, execute the below command.

$ sudo apt install wget

steam-9.png

Step 2: Download the Steam Package

With wget installed, run the following command to download the Steam .deb package.

$ wget https://steamcdn-a.akamaihd.net/client/installer/steam.deb

steam-10.png

Step 3: Install Steam

To install the .deb package, we will use the dpkg command below.

$ sudo dpkg -i steam.deb

steam-11.png

Once Steam completes installing, verify that you can access it by searching for it on your Ubuntu 24.04.

steam-12.png

With that, you now have Steam installed on Ubuntu.
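You can also launch it straight from a terminal, which is handy for watching the log output if something goes wrong:

$ steam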

Conclusion

Steam is a handy tool for any gamer, and its cross-platform nature means you can install it on Ubuntu 24.04. We’ve given three installation methods you can use depending on your preference. Once you’ve installed Steam, configure it and sign in to your account to start using it. Happy gaming!

Proxmox VE 8 is one of the best open-source and free Type-I hypervisors out there for running QEMU/KVM virtual machines (VMs) and LXC containers. It has a nice web management interface and a lot of features.

One of the most amazing features of Proxmox VE is that it can passthrough PCI/PCIE devices (i.e. an NVIDIA GPU) from your computer to Proxmox VE virtual machines (VMs). The PCI/PCIE passthrough is getting better and better with newer Proxmox VE releases. At the time of this writing, the latest version of Proxmox VE is Proxmox VE v8.1 and it has great PCI/PCIE passthrough support.

In this article, I am going to show you how to configure your Proxmox VE 8 host/server for PCI/PCIE passthrough and configure your NVIDIA GPU for PCIE passthrough on Proxmox VE 8 virtual machines (VMs).

 

Table of Contents

  1. Enabling Virtualization from the BIOS/UEFI Firmware of Your Motherboard
  2. Installing Proxmox VE 8
  3. Enabling Proxmox VE 8 Community Repositories
  4. Installing Updates on Proxmox VE 8
  5. Enabling IOMMU from the BIOS/UEFI Firmware of Your Motherboard
  6. Enabling IOMMU on Proxmox VE 8
  7. Verifying if IOMMU is Enabled on Proxmox VE 8
  8. Loading VFIO Kernel Modules on Proxmox VE 8
  9. Listing IOMMU Groups on Proxmox VE 8
  10. Checking if Your NVIDIA GPU Can Be Passthrough to a Proxmox VE 8 Virtual Machine (VM)
  11. Checking for the Kernel Modules to Blacklist for PCI/PCIE Passthrough on Proxmox VE 8
  12. Blacklisting Required Kernel Modules for PCI/PCIE Passthrough on Proxmox VE 8
  13. Configuring Your NVIDIA GPU to Use the VFIO Kernel Module on Proxmox VE 8
  14. Passthrough the NVIDIA GPU to a Proxmox VE 8 Virtual Machine (VM)
  15. Still Having Problems with PCI/PCIE Passthrough on Proxmox VE 8 Virtual Machines (VMs)?
  16. Conclusion
  17. References

 

Enabling Virtualization from the BIOS/UEFI Firmware of Your Motherboard

Before you can install Proxmox VE 8 on your computer/server, you must enable the hardware virtualization feature of your processor from the BIOS/UEFI firmware of your motherboard. The process is different for different motherboards. So, if you need any assistance in enabling hardware virtualization on your motherboard, read this article.

 

Installing Proxmox VE 8

Proxmox VE 8 is free to download, install, and use. Before you get started, make sure to install Proxmox VE 8 on your computer. If you need any assistance on that, read this article.

 

Enabling Proxmox VE 8 Community Repositories

Once you have Proxmox VE 8 installed on your computer/server, make sure to enable the Proxmox VE 8 community package repositories.

By default, Proxmox VE 8 enterprise package repositories are enabled and you won’t be able to get/install updates and bug fixes from the enterprise repositories unless you have bought Proxmox VE 8 enterprise licenses. So, if you want to use Proxmox VE 8 for free, make sure to enable the Proxmox VE 8 community package repositories to get the latest updates and bug fixes from Proxmox for free.

 

Installing Updates on Proxmox VE 8

Once you’ve enabled the Proxmox VE 8 community package repositories, make sure to install all the available updates on your Proxmox VE 8 server.

 

Enabling IOMMU from the BIOS/UEFI Firmware of Your Motherboard

The IOMMU configuration is found in different locations in different motherboards. To enable IOMMU on your motherboard, read this article.

 

Enabling IOMMU on Proxmox VE 8

Once the IOMMU is enabled on the hardware side, you also need to enable IOMMU from the software side (from Proxmox VE 8).

To enable IOMMU from Proxmox VE 8, you have to add the following kernel boot parameters:

Processor Vendor Kernel boot parameters to add
Intel intel_iommu=on iommu=pt
AMD iommu=pt

 

To modify the kernel boot parameters of Proxmox VE 8, open the /etc/default/grub file with the nano text editor as follows:

$ nano /etc/default/grub

 

At the end of the GRUB_CMDLINE_LINUX_DEFAULT, add the required kernel boot parameters for enabling IOMMU depending on the processor you’re using.

As I am using an AMD processor, I have added only the kernel boot parameter iommu=pt at the end of the GRUB_CMDLINE_LINUX_DEFAULT line in the /etc/default/grub file.
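For reference, assuming the default options, the finished line on an AMD system might look like this (your existing options may differ):

GRUB_CMDLINE_LINUX_DEFAULT="quiet iommu=pt"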

Once you’re done, press <Ctrl> + X followed by Y and <Enter> to save the /etc/default/grub file.

 

Now, update the GRUB boot configurations with the following command:

$ update-grub2

 

Once the GRUB boot configurations are updated, click on Reboot to restart your Proxmox VE 8 server for the changes to take effect.

 

Verifying if IOMMU is Enabled on Proxmox VE 8

To verify whether IOMMU is enabled on Proxmox VE 8, run the following command:

$ dmesg | grep -e DMAR -e IOMMU

 

If IOMMU is enabled, you will see some outputs confirming that IOMMU is enabled.

If IOMMU is not enabled, you may not see any outputs.

 

You also need to have the IOMMU Interrupt Remapping enabled for PCI/PCIE passthrough to work.

To check if IOMMU Interrupt Remapping is enabled on your Proxmox VE 8 server, run the following command:

$ dmesg | grep 'remapping'

 

As you can see, IOMMU Interrupt Remapping is enabled on my Proxmox VE 8 server.

NOTE: Most modern AMD and Intel processors will have IOMMU Interrupt Remapping enabled. If for any reason, you don’t have IOMMU Interrupt Remapping enabled, there’s a workaround. You have to enable Unsafe Interrupts for VFIO. Read this article for more information on enabling Unsafe Interrupts on your Proxmox VE 8 server.

 

Loading VFIO Kernel Modules on Proxmox VE 8

The PCI/PCIE passthrough is done mainly by the VFIO (Virtual Function I/O) kernel modules on Proxmox VE 8. The VFIO kernel modules are not loaded at boot time by default on Proxmox VE 8. But, it’s easy to load the VFIO kernel modules at boot time on Proxmox VE 8.

First, open the /etc/modules-load.d/vfio.conf file with the nano text editor as follows:

$ nano /etc/modules-load.d/vfio.conf

 

Type in the following lines in the /etc/modules-load.d/vfio.conf file.

vfio

vfio_iommu_type1

vfio_pci

 

Once you’re done, press <Ctrl> + X followed by Y and <Enter> to save the changes.

 

Now, update the initramfs of your Proxmox VE 8 installation with the following command:

$ update-initramfs -u -k all

 

Once the initramfs is updated, click on Reboot to restart your Proxmox VE 8 server for the changes to take effect.

 

Once your Proxmox VE 8 server boots, you should see that all the required VFIO kernel modules are loaded.

$ lsmod | grep vfio

 

Listing IOMMU Groups on Proxmox VE 8

To passthrough PCI/PCIE devices on Proxmox VE 8 virtual machines (VMs), you will need to check the IOMMU groups of your PCI/PCIE devices quite frequently. To make checking for IOMMU groups easier, I decided to write a shell script (I got it from GitHub, but I can’t remember the name of the original poster) in the path /usr/local/bin/print-iommu-groups so that I can just run print-iommu-groups command and it will print the IOMMU groups on the Proxmox VE 8 shell.

 

First, create a new file print-iommu-groups in the path /usr/local/bin and open it with the nano text editor as follows:

$ nano /usr/local/bin/print-iommu-groups

 

Type in the following lines in the print-iommu-groups file:

#!/bin/bash

# Print every IOMMU group and the PCI/PCIE devices it contains
shopt -s nullglob

for g in $(find /sys/kernel/iommu_groups/* -maxdepth 0 -type d | sort -V); do
    echo "IOMMU Group ${g##*/}:"
    for d in $g/devices/*; do
        echo -e "\t$(lspci -nns ${d##*/})"
    done
done

 

Once you’re done, press <Ctrl> + X followed by Y and <Enter> to save the changes to the print-iommu-groups file.

 

Make the print-iommu-groups script file executable with the following command:

$ chmod +x /usr/local/bin/print-iommu-groups

 

Now, you can run the print-iommu-groups command as follows to print the IOMMU groups of the PCI/PCIE devices installed on your Proxmox VE 8 server:

$ print-iommu-groups

 

As you can see, the IOMMU groups of the PCI/PCIE devices installed on my Proxmox VE 8 server are printed.

 

Checking if Your NVIDIA GPU Can Be Passthrough to a Proxmox VE 8 Virtual Machine (VM)

To passthrough a PCI/PCIE device to a Proxmox VE 8 virtual machine (VM), it must be in its own IOMMU group. If 2 or more PCI/PCIE devices share an IOMMU group, you can’t passthrough any of the PCI/PCIE devices of that IOMMU group to any Proxmox VE 8 virtual machines (VMs).

So, if your NVIDIA GPU and its audio device are on its own IOMMU group, you can passthrough the NVIDIA GPU to any Proxmox VE 8 virtual machines (VMs).

On my Proxmox VE 8 server, I am using an MSI X570 ACE motherboard paired with a Ryzen 3900X processor and Gigabyte RTX 4070 NVIDIA GPU. According to the IOMMU groups of my system, I can passthrough the NVIDIA RTX 4070 GPU (IOMMU Group 21), RTL8125 2.5Gbe Ethernet Controller (IOMMU Group 20), Intel I211 Gigabit Ethernet Controller (IOMMU Group 19), a USB 3.0 controller (IOMMU Group 24), and the Onboard HD Audio Controller (IOMMU Group 25).

$ print-iommu-groups

 

As the main focus of this article is configuring Proxmox VE 8 for passing through the NVIDIA GPU to Proxmox VE 8 virtual machines, the NVIDIA GPU and its Audio device must be in its own IOMMU group.

 

Checking for the Kernel Modules to Blacklist for PCI/PCIE Passthrough on Proxmox VE 8

To passthrough a PCI/PCIE device on a Proxmox VE 8 virtual machine (VM), you must make sure that Proxmox VE forces it to use the VFIO kernel module instead of its original kernel module.

To find out the kernel module your PCI/PCIE devices are using, you will need to know the vendor ID and device ID of these PCI/PCIE devices. You can find the vendor ID and device ID of the PCI/PCIE devices using the print-iommu-groups command.

$ print-iommu-groups

 

For example, the vendor ID and device ID of my NVIDIA RTX 4070 GPU is 10de:2786 and its audio device is 10de:22bc.

 

To find the kernel module a PCI/PCIE device 10de:2786 (my NVIDIA RTX 4070 GPU) is using, run the lspci command as follows:

$ lspci -v -d 10de:2786

 

As you can see, my NVIDIA RTX 4070 GPU is using the nvidiafb and nouveau kernel modules by default. So, they can’t be passed to a Proxmox VE 8 virtual machine (VM) at this point.

 

The audio device of my NVIDIA RTX 4070 GPU is using the snd_hda_intel kernel module. So, it can’t be passed to a Proxmox VE 8 virtual machine at this point either.

$ lspci -v -d 10de:22bc

 

So, to passthrough my NVIDIA RTX 4070 GPU and its audio device on a Proxmox VE 8 virtual machine (VM), I must blacklist the nvidiafb, nouveau, and snd_hda_intel kernel modules and configure my NVIDIA RTX 4070 GPU and its audio device to use the vfio-pci kernel module.

 

Blacklisting Required Kernel Modules for PCI/PCIE Passthrough on Proxmox VE 8

To blacklist kernel modules on Proxmox VE 8, open the /etc/modprobe.d/blacklist.conf file with the nano text editor as follows:

$ nano /etc/modprobe.d/blacklist.conf

 

To blacklist the kernel modules nouveau, nvidiafb, and snd_hda_intel kernel modules (to passthrough NVIDIA GPU), add the following lines in the /etc/modprobe.d/blacklist.conf file:

blacklist nouveau

blacklist nvidiafb

blacklist snd_hda_intel

 

Once you’re done, press <Ctrl> + X followed by Y and <Enter> to save the /etc/modprobe.d/blacklist.conf file.

 

Configuring Your NVIDIA GPU to Use the VFIO Kernel Module on Proxmox VE 8

To configure the PCI/PCIE device (i.e. your NVIDIA GPU) to use the VFIO kernel module, you need to know their vendor ID and device ID.

In this case, the vendor ID and device ID of my NVIDIA RTX 4070 GPU and its audio device are 10de:2786 and 10de:22bc.

 

To configure your NVIDIA GPU to use the VFIO kernel module, open the /etc/modprobe.d/vfio.conf file with the nano text editor as follows:

$ nano /etc/modprobe.d/vfio.conf

 

To configure your NVIDIA GPU and its audio device with the <vendor-id>:<device-id> 10de:2786 and 10de:22bc (let’s say) respectively to use the VFIO kernel module, add the following line to the /etc/modprobe.d/vfio.conf file.

options vfio-pci ids=10de:2786,10de:22bc

 

Once you’re done, press <Ctrl> + X followed by Y and <Enter> to save the /etc/modprobe.d/vfio.conf file.

 

Now, update the initramfs of Proxmox VE 8 with the following command:

$ update-initramfs -u -k all

 

Once initramfs is updated, click on Reboot to restart your Proxmox VE 8 server for the changes to take effect.

 

Once your Proxmox VE 8 server boots, you should see that your NVIDIA GPU and its audio device (10de:2786 and 10de:22bc in my case) are using the vfio-pci kernel module. Now, your NVIDIA GPU is ready to be passed to a Proxmox VE 8 virtual machine.

$ lspci -v -d 10de:2786

$ lspci -v -d 10de:22bc

 

Passthrough the NVIDIA GPU to a Proxmox VE 8 Virtual Machine (VM)

Now that your NVIDIA GPU is ready for passthrough on Proxmox VE 8 virtual machines (VMs), you can passthrough your NVIDIA GPU on your desired Proxmox VE 8 virtual machine and install the NVIDIA GPU drivers depending on the operating system that you’re using on that virtual machine as usual.

For detailed information on how to passthrough your NVIDIA GPU on a Proxmox VE 8 virtual machine (VM) with different operating systems installed, read one of the following articles:

  • How to Passthrough an NVIDIA GPU to a Windows 11 Proxmox VE 8 Virtual Machine (VM)
  • How to Passthrough an NVIDIA GPU to a Ubuntu 24.04 LTS Proxmox VE 8 Virtual Machine (VM)
  • How to Passthrough an NVIDIA GPU to a LinuxMint 21 Proxmox VE 8 Virtual Machine (VM)
  • How to Passthrough an NVIDIA GPU to a Debian 12 Proxmox VE 8 Virtual Machine (VM)
  • How to Passthrough an NVIDIA GPU to an Elementary OS 8 Proxmox VE 8 Virtual Machine (VM)
  • How to Passthrough an NVIDIA GPU to a Fedora 39+ Proxmox VE 8 Virtual Machine (VM)
  • How to Passthrough an NVIDIA GPU on an Arch Linux Proxmox VE 8 Virtual Machine (VM)
  • How to Passthrough an NVIDIA GPU on a Red Hat Enterprise Linux 9 (RHEL 9) Proxmox VE 8 Virtual Machine (VM)

 

Still Having Problems with PCI/PCIE Passthrough on Proxmox VE 8 Virtual Machines (VMs)?

Even after trying everything listed in this article correctly, if PCI/PCIE passthrough still does not work for you, be sure to try out some of the Proxmox VE PCI/PCIE passthrough tricks and/or workarounds that you can use to get PCI/PCIE passthrough working on your hardware.

 

Conclusion

In this article, I have shown you how to configure your Proxmox VE 8 server for PCI/PCIE passthrough so that you can passthrough PCI/PCIE devices (i.e. your NVIDIA GPU) to your Proxmox VE 8 virtual machines (VMs). I have also shown you how to find out the kernel modules that you need to blacklist and how to blacklist them for a successful passthrough of your desired PCI/PCIE devices (i.e. your NVIDIA GPU) to a Proxmox VE 8 virtual machine. Finally, I have shown you how to configure your desired PCI/PCIE devices (i.e. your NVIDIA GPU) to use the VFIO kernel modules, which is also an essential step for a successful passthrough of your desired PCI/PCIE devices (i.e. your NVIDIA GPU) to a Proxmox VE 8 virtual machine (VM).

 

References

  1. PCI(e) Passthrough – Proxmox VE
  2. PCI Passthrough – Proxmox VE
  3. The ultimate gaming virtual machine on proxmox – YouTube

Anyone can easily run multiple operating systems on one host simultaneously, provided they have VirtualBox installed. Even on Ubuntu 24.04, you can install VirtualBox and use it to run any supported operating system. The best part about VirtualBox is that it is open-source virtualization software, and you can install and use it at any time. Whether you are stuck on how to install VirtualBox on Ubuntu 24.04 or looking to run other operating systems on top of your host, this post gives you two easy methods.

Two Methods of Installing VirtualBox on Ubuntu 24.04

There are different ways of installing VirtualBox on Ubuntu 24.04. For instance, you can retrieve a stable VirtualBox version from Ubuntu’s repository or add Oracle’s VirtualBox repository to install a specific version. Which method to use will depend on your requirements, and we’ve discussed the methods in the sections below.

Method 1: Install VirtualBox via APT
The easiest way of installing VirtualBox on Ubuntu 24.04 is by sourcing it from the official Ubuntu repository using APT.
Below are the steps you should follow.
Step 1: Update the Repository
In every installation, the first step involves refreshing the source list to update the package index by executing the following command.

$ sudo apt update

Step 2: Install VirtualBox
Once you’ve updated your package index, the next task is to run the install command below to fetch and install the VirtualBox package.

$ sudo apt install virtualbox

Step 3: Verify the Installation
After the installation, use the following command to check the installed version. The output also confirms that you successfully installed VirtualBox on Ubuntu 24.04.

$ VBoxManage --version

Method 2: Install VirtualBox from Oracle’s Repository
The previous method shows that we installed VirtualBox version 7.0.14. However, if you visit the VirtualBox website, depending on when you read this post, it’s likely that the version we’ve installed may not be the latest.

Although the older VirtualBox versions are okay, installing the latest version is always the better option as it contains all patches and fixes. However, to install the latest version, you must add Oracle’s repository to your Ubuntu before you can execute the install command.

Step 1: Install Prerequisites
All the dependencies you require before you can add the Oracle VirtualBox repository can be installed when you install the software-properties-common package.

$ sudo apt install software-properties-common

Step 2: Add GPG Keys
GPG keys help verify the authenticity of repositories before we can add them to the system. The Oracle repository is a third-party repository, and by installing the GPG keys, it will be checked for integrity and authenticity.
Here’s how you add the GPG keys.

$ wget -q https://www.virtualbox.org/download/oracle_vbox_2016.asc -O- | sudo apt-key add -

You will receive an output on your terminal showing that the key has been downloaded and installed.
Step 3: Add Oracle’s VirtualBox Repository
Oracle has a VirtualBox repository for all supported Operating Systems. To fetch this repository and add it to your /etc/apt/sources.list.d/, execute the following command.

$ echo "deb [arch=amd64] https://download.virtualbox.org/virtualbox/debian $(lsb_release -cs) contrib" | sudo tee /etc/apt/sources.list.d/virtualbox.list

The output shows that a new repository entry has been created from which we will source VirtualBox when we execute the install command.

Step 4: Install VirtualBox
With the repository added, let’s first refresh the package index by updating it.

$ sudo apt update

Next, specify which VirtualBox you want to install using the below syntax.

$ sudo apt install virtualbox-[version]

For instance, if the latest version when reading this post is version 7.1, you would replace version in the above command with 7.1.
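Assuming 7.1 is indeed the current release, the command would be:

$ sudo apt install virtualbox-7.1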

However, ensure that the specified version is available on the VirtualBox website. Otherwise, you will get an error as you can’t install something that can’t be found.

Conclusion

VirtualBox is an effective way of running numerous Operating Systems on one host simultaneously. This post shares two methods of installing VirtualBox on Ubuntu 24.04. First, you can install it via APT by sourcing it from the Ubuntu repository. Alternatively, you can add the Oracle repository and specify a specific version number for the VirtualBox you want to install.

In recent years, support for PCI/PCIE passthrough (e.g. GPU passthrough) has improved a lot on newer hardware. So, the regular Proxmox VE PCI/PCIE and GPU passthrough guide should work on most new hardware. Still, you may face many problems passing through GPUs and other PCI/PCIE devices on a Proxmox VE virtual machine. There are many tweaks/fixes/workarounds for some of the common Proxmox VE GPU and PCI/PCIE passthrough problems.

In this article, I am going to discuss some of the most common Proxmox VE PCI/PCIE passthrough and GPU passthrough problems and the steps you can take to solve those problems.

 

Table of Contents

  1. What to do if IOMMU Interrupt Remapping is not Supported?
  2. What to do if My GPU (or PCI/PCIE Device) is not in its own IOMMU Group?
  3. How do I Blacklist AMD GPU Drivers on Proxmox VE?
  4. How do I Blacklist NVIDIA GPU Drivers on Proxmox VE?
  5. How do I Blacklist Intel GPU Drivers on Proxmox VE?
  6. How to Check if my GPU (or PCI/PCIE Device) is Using the VFIO Driver on Proxmox VE?
  7. I Have Blacklisted the AMD GPU Drivers, Still, the GPU is not Using the VFIO Driver, What to Do?
  8. I Have Blacklisted the NVIDIA GPU Drivers, Still, the GPU is not Using the VFIO Driver, What to Do?
  9. I Have Blacklisted the Intel GPU Drivers, Still, the GPU is not Using the VFIO Driver, What to Do?
  10. Single GPU Used VFIO Driver, But When Configured a Second GPU, it Didn’t Work, Why?
  11. Why Disable VGA Arbitration for the GPUs and How to Do It?
  12. What if my GPU is Still not Using the VFIO Driver Even After Configuring VFIO?
  13. GPU Passthrough Showed No Errors, But I’m Getting a Black Screen on the Monitor Connected to the GPU Passed to the Proxmox VE VM, Why?
  14. What is AMD Vendor Reset Bug and How to Solve it?
  15. How to Provide a vBIOS for the Passed GPU on a Proxmox VE Virtual Machine?
  16. What to do If Some Apps Crash the Proxmox VE Windows Virtual Machine?
  17. How to Solve HDMI Audio Crackling/Broken Problems on Proxmox VE Linux Virtual Machines?
  18. How to Update Proxmox VE initramfs?
  19. How to Update Proxmox VE GRUB Bootloader?
  20. Conclusion
  21. References

 

What to do If IOMMU Interrupt Remapping is not Supported?

For PCI/PCIE passthrough, IOMMU interrupt remapping is essential.

To check whether your processor supports IOMMU interrupt remapping, run the command below:

$ dmesg | grep -i remap

 

If your processor supports IOMMU interrupt remapping, you will see some sort of output confirming that interrupt remapping is enabled. Otherwise, you will see no outputs.

If IOMMU interrupt remapping is not supported on your processor, you will have to configure unsafe interrupts on your Proxmox VE server to passthrough PCI/PCIE devices on Proxmox VE virtual machines.

To configure unsafe interrupts on Proxmox VE, create a new file iommu_unsafe_interrupts.conf in the /etc/modprobe.d directory and open it with the nano text editor as follows:

$ nano /etc/modprobe.d/iommu_unsafe_interrupts.conf

 

Add the following line in the iommu_unsafe_interrupts.conf file and press <Ctrl> + X followed by Y and <Enter> to save the file.

options vfio_iommu_type1 allow_unsafe_interrupts=1

 

Once you’re done, you must update the initramfs of your Proxmox VE server.

 

What to do if my GPU (or PCI/PCIE Device) is not in its own IOMMU Group?

If your server has multiple PCI/PCIE slots, you can move the GPU to a different PCI/PCIE slot and see if the GPU is in its own IOMMU group.

If that does not work, you can try enabling the ACS override kernel patch on Proxmox VE.

To try enabling the ACS override kernel patch on Proxmox VE, open the /etc/default/grub file with the nano text editor as follows:

$ nano /etc/default/grub

 

Add the kernel boot option pcie_acs_override=downstream at the end of the GRUB_CMDLINE_LINUX_DEFAULT.
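As a rough example, keeping whatever options are already there, the edited line might look like this:

GRUB_CMDLINE_LINUX_DEFAULT="quiet iommu=pt pcie_acs_override=downstream"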

Once you’re done, press <Ctrl> + X followed by Y and <Enter> to save the file and make sure to update the Proxmox VE GRUB bootloader for the changes to take effect.

You should have better IOMMU grouping once your Proxmox VE server boots.

If your GPU still does not have its own IOMMU group, you can go one step further by using the pcie_acs_override=downstream,multifunction instead. You should have an even better IOMMU grouping.

 

If pcie_acs_override=downstream,multifunction results in better IOMMU grouping than pcie_acs_override=downstream, then why use pcie_acs_override=downstream at all?

Well, the purpose of PCIE ACS override is to fool the kernel into thinking that the PCIE devices are isolated when they are not in reality. So, PCIE ACS override comes with security and stability issues. That’s why you should try using a less aggressive PCIE ACS override option pcie_acs_override=downstream first and see if your problem is solved. If pcie_acs_override=downstream does not work, only then you should use the more aggressive option pcie_acs_override=downstream,multifunction.

 

How do I Blacklist AMD GPU Drivers on Proxmox VE?

If you want to passthrough an AMD GPU on Proxmox VE virtual machines, you must blacklist the AMD GPU drivers and make sure that it uses the VFIO driver instead.

First, open the /etc/modprobe.d/blacklist.conf file with the nano text editor as follows:

$ nano /etc/modprobe.d/blacklist.conf

 

To blacklist the AMD GPU drivers, add the following lines to the /etc/modprobe.d/blacklist.conf file and press <Ctrl> + X followed by Y and <Enter> to save the file.

blacklist radeon

blacklist amdgpu

 

Once you’re done, you must update the initramfs of your Proxmox VE server for the changes to take effect.

 

How do I Blacklist NVIDIA GPU Drivers on Proxmox VE?

If you want to passthrough an NVIDIA GPU on Proxmox VE virtual machines, you must blacklist the NVIDIA GPU drivers and make sure that it uses the VFIO driver instead.

First, open the /etc/modprobe.d/blacklist.conf file with the nano text editor as follows:

$ nano /etc/modprobe.d/blacklist.conf

 

To blacklist the NVIDIA GPU drivers, add the following lines to the /etc/modprobe.d/blacklist.conf file and press <Ctrl> + X followed by Y and <Enter> to save the file.

blacklist nouveau

blacklist nvidia

blacklist nvidiafb

blacklist nvidia_drm

 

Once you’re done, you must update the initramfs of your Proxmox VE server for the changes to take effect.

 

How do I Blacklist Intel GPU Drivers on Proxmox VE?

If you want to passthrough an Intel GPU on Proxmox VE virtual machines, you must blacklist the Intel GPU drivers and make sure that it uses the VFIO driver instead.

First, open the /etc/modprobe.d/blacklist.conf file with the nano text editor as follows:

$ nano /etc/modprobe.d/blacklist.conf

 

To blacklist the Intel GPU drivers, add the following lines to the /etc/modprobe.d/blacklist.conf file and press <Ctrl> + X followed by Y and <Enter> to save the file.

blacklist snd_hda_intel

blacklist snd_hda_codec_hdmi

blacklist i915

 

Once you’re done, you must update the initramfs of your Proxmox VE server for the changes to take effect.

 

How to Check if my GPU (or PCI/PCIE Device) is Using the VFIO Driver on Proxmox VE?

To check if your GPU or desired PCI/PCIE devices are using the VFIO driver, run the following command:

$ lspci -v

 

If your GPU or PCI/PCIE device is using the VFIO driver, you should see the line Kernel driver in use: vfio-pci as marked in the screenshot below.

 

I Have Blacklisted the AMD GPU Drivers, Still, the GPU is not Using the VFIO Driver, What to Do?

At times, blacklisting the AMD GPU drivers is not enough, you also have to configure the AMD GPU drivers to load after the VFIO driver.

To do that, open the /etc/modprobe.d/vfio.conf file with the nano text editor as follows:

$ nano /etc/modprobe.d/vfio.conf

 

To configure the AMD GPU drivers to load after the VFIO driver, add the following lines to the /etc/modprobe.d/vfio.conf file and press <Ctrl> + X followed by Y and <Enter> to save the file.

softdep radeon pre: vfio-pci

softdep amdgpu pre: vfio-pci

 

Once you’re done, you must update the initramfs of your Proxmox VE server for the changes to take effect.

 

I Have Blacklisted the NVIDIA GPU Drivers, Still, the GPU is not Using the VFIO Driver, What to Do?

At times, blacklisting the NVIDIA GPU drivers is not enough, you also have to configure the NVIDIA GPU drivers to load after the VFIO driver.

To do that, open the /etc/modprobe.d/vfio.conf file with the nano text editor as follows:

$ nano /etc/modprobe.d/vfio.conf

 

To configure the NVIDIA GPU drivers to load after the VFIO driver, add the following lines to the /etc/modprobe.d/vfio.conf file and press <Ctrl> + X followed by Y and <Enter> to save the file.

softdep nouveau pre: vfio-pci

softdep nvidia pre: vfio-pci

softdep nvidiafb pre: vfio-pci

softdep nvidia_drm pre: vfio-pci

softdep drm pre: vfio-pci

 

Once you’re done, you must update the initramfs of your Proxmox VE server for the changes to take effect.

 

I Have Blacklisted the Intel GPU Drivers, Still, the GPU is not Using the VFIO Driver, What to Do?

At times, blacklisting the Intel GPU drivers is not enough, you also have to configure the Intel GPU drivers to load after the VFIO driver.

To do that, open the /etc/modprobe.d/vfio.conf file with the nano text editor as follows:

$ nano /etc/modprobe.d/vfio.conf

 

To configure the Intel GPU drivers to load after the VFIO driver, add the following lines to the /etc/modprobe.d/vfio.conf file and press <Ctrl> + X followed by Y and <Enter> to save the file.

softdep snd_hda_intel pre: vfio-pci

softdep snd_hda_codec_hdmi pre: vfio-pci

softdep i915 pre: vfio-pci

 

Once you’re done, you must update the initramfs of your Proxmox VE server for the changes to take effect.

 

Single GPU Used VFIO Driver, But When Configured a Second GPU, it Didn’t Work, Why?

In the /etc/modprobe.d/vfio.conf file, you must add the IDs of all the PCI/PCIE devices that you want to use the VFIO driver in a single line. One device per line won’t work.

For example, if you have 2 GPUs that you want to configure to use the VFIO driver, you must add their IDs in a single line in the /etc/modprobe.d/vfio.conf file as follows:

options vfio-pci ids=<GPU-1>,<GPU-1-Audio>,<GPU-2>,<GPU-2-Audio>

 

If you want to add another GPU to the list, just append it at the end of the existing vfio-pci line in the /etc/modprobe.d/vfio.conf file as follows:

options vfio-pci ids=<GPU-1>,<GPU-1-Audio>,<GPU-2>,<GPU-2-Audio>,<GPU-3>,<GPU-3-Audio>

 

Never split the IDs across multiple lines as shown below. Although it looks much cleaner, it won’t work; I do wish we could specify PCI/PCIE IDs this way.

options vfio-pci ids=<GPU-1>,<GPU-1-Audio>

options vfio-pci ids=<GPU-2>,<GPU-2-Audio>

options vfio-pci ids=<GPU-3>,<GPU-3-Audio>

 

Why Disable VGA Arbitration for the GPUs and How to Do It?

If you’re using UEFI/OVMF BIOS on the Proxmox VE virtual machine where you want to passthrough the GPU, you can disable VGA arbitration which will reduce the legacy codes required during boot.

To disable VGA arbitration for the GPUs, add disable_vga=1 at the end of the vfio-pci option in the /etc/modprobe.d/vfio.conf file as shown below:

options vfio-pci ids=<GPU-1>,<GPU-1-Audio>,<GPU-2>,<GPU-2-Audio> disable_vga=1

 

What if my GPU is Still not Using the VFIO Driver Even After Configuring VFIO?

Even after doing everything correctly, if your GPU still does not use the VFIO driver, you will need to try booting Proxmox VE with kernel options that disable the video framebuffer.

On Proxmox VE 7.1 and older, the nofb nomodeset video=vesafb:off video=efifb:off video=simplefb:off kernel options disable the GPU framebuffer for your Proxmox VE server.

On Proxmox VE 7.2 and newer, the initcall_blacklist=sysfb_init kernel option does a better job at disabling the GPU framebuffer for your Proxmox VE server.

Open the GRUB bootloader configuration file /etc/default/grub file with the nano text editor with the following command:

$ nano /etc/default/grub

 

Add the kernel option initcall_blacklist=sysfb_init at the end of the GRUB_CMDLINE_LINUX_DEFAULT.
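For example, keeping whatever options are already there, the line might end up looking like this:

GRUB_CMDLINE_LINUX_DEFAULT="quiet iommu=pt initcall_blacklist=sysfb_init"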

Once you’re done, press <Ctrl> + X followed by Y and <Enter> to save the file and make sure to update the Proxmox VE GRUB bootloader for the changes to take effect.

 

GPU Passthrough Showed No Errors, But I’m Getting a Black Screen on the Monitor Connected to the GPU Passed to the Proxmox VE VM, Why?

Once you’ve passed a GPU to a Proxmox VE virtual machine, make sure to use the Default Graphics card before you start the virtual machine. This way, you will be able to access the display of the virtual machine from the Proxmox VE web management UI, download the GPU driver installer on the virtual machine, and install it on the virtual machine.

Once the GPU driver is installed on the virtual machine, the screen of the virtual machine will be displayed on the monitor connected to the GPU that you’ve passed to the virtual machine as well.

 

Once the GPU driver is installed and the virtual machine’s screen is displayed on the monitor connected to the passed-through GPU, power off the virtual machine and set its Display (Graphic card) to none.

Once that’s set, the next time you power on the virtual machine, its screen will be displayed only on the monitor connected to the passed-through GPU; nothing will be displayed in the Proxmox VE web management UI. This way, you get the same experience as using a real computer even though you’re using a virtual machine.
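If you prefer the command line over the web UI, the Display can also be set to none with the qm tool. This is a minimal example assuming the virtual machine ID is 100 (yours will differ):

$ qm set 100 --vga none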

 

Remember, never use the SPICE, VirtIO-GPU, or VirGL GPU Display (Graphic card) options on a Proxmox VE virtual machine that you’re configuring for GPU passthrough, as they have a high chance of failure.

 

What Is the AMD Vendor Reset Bug and How Do I Solve It?

AMD GPUs have a well-known issue called the “vendor reset bug”. Once an AMD GPU has been passed to a Proxmox VE virtual machine and that virtual machine is powered off, you won’t be able to use the GPU in another Proxmox VE virtual machine, and at times the Proxmox VE server itself will become unresponsive as a result.

The reason this happens is that AMD GPUs can’t reset themselves correctly after being passed to a virtual machine. To fix the problem, the GPU has to be reset properly, which is what the vendor-reset kernel module does. For more information on installing vendor-reset on Proxmox VE, read this article and this thread on the Proxmox VE forum. Also, check the vendor-reset GitHub page.
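As a rough sketch only, installing the vendor-reset module on the Proxmox VE host usually looks something like the following. This assumes the gnif/vendor-reset project and that the matching pve-headers package is available; follow the linked article, forum thread, and GitHub page for the authoritative, up-to-date steps:

$ apt install pve-headers dkms git

$ git clone https://github.com/gnif/vendor-reset.git

$ cd vendor-reset

$ dkms install .

$ echo "vendor-reset" >> /etc/modules

$ update-initramfs -u -k all

$ reboot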

 

How to Provide a vBIOS for the Passed GPU on a Proxmox VE Virtual Machine?

If you’ve installed the GPU in the first slot of your motherboard, you might not be able to pass it through to a Proxmox VE virtual machine by default. Some motherboards shadow the vBIOS of the GPU installed in the first slot, which is why that GPU can’t be passed through to virtual machines.

The solution is to install the GPU in the second slot of the motherboard, extract its vBIOS, move the GPU back to the first slot, and pass it through to a Proxmox VE virtual machine along with the extracted vBIOS.

NOTE: To learn how to extract the vBIOS of your GPU, read this article.

Once you’ve obtained the vBIOS for your GPU, you must store the vBIOS file in the /usr/share/kvm/ directory of your Proxmox VE server so that Proxmox VE can access it.
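For example, assuming your vBIOS dump is named gigabyte-nvidia-1050ti.bin (the same hypothetical filename used in the example further below), you could copy it into place like this:

$ cp gigabyte-nvidia-1050ti.bin /usr/share/kvm/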

Once the vBIOS file for your GPU is stored in the /usr/share/kvm/ directory, you need to configure your virtual machine to use it. Currently, there is no way to specify the vBIOS file for PCI/PCIE devices of Proxmox VE virtual machines from the Proxmox VE web management UI. So, you will have to do everything from the Proxmox VE shell/command-line.

You can find the Proxmox VE virtual machine configuration files in the /etc/pve/qemu-server/ directory of your Proxmox VE server. Each Proxmox VE virtual machine has one configuration file in this directory in the format <VM-ID>.conf.

For example, to open the Proxmox VE virtual machine configuration file (for editing) for the virtual machine ID 100, you will need to run the following command:

$ nano /etc/pve/qemu-server/100.conf

 

In the virtual machine configuration file, you will need to append romfile=<vBIOS-filename> to the hostpciX line that is responsible for passing the GPU to the virtual machine.

For example, if the vBIOS filename for my GPU is gigabyte-nvidia-1050ti.bin, and I have passed the GPU as the first PCI device (hostpci0) of the virtual machine, then in the 100.conf file the line should be as follows:

hostpci0: <PCI-ID-of-GPU>,x-vga=on,romfile=gigabyte-nvidia-1050ti.bin

 

Once you’re done, save the virtual machine configuration file by pressing <Ctrl> + X followed by Y and <Enter>, start the virtual machine, and check if the GPU passthrough is working.

 

What to Do if Some Apps Crash the Proxmox VE Windows Virtual Machine?

Some apps, such as GeForce Experience and Passmark, might crash Proxmox VE Windows virtual machines, or you might experience a sudden blue screen of death (BSOD). This happens because the Windows virtual machine may try to access model-specific registers (MSRs) that are not actually available, and depending on how your hardware handles MSR requests, your system might crash.

The solution to this problem is to configure KVM to ignore unhandled MSR accesses on your Proxmox VE server.

To configure this on your Proxmox VE server, open the /etc/modprobe.d/kvm.conf file with the nano text editor as follows:

$ nano /etc/modprobe.d/kvm.conf

 

To ignore unhandled MSR accesses on your Proxmox VE server, add the following line to the /etc/modprobe.d/kvm.conf file.

options kvm ignore_msrs=1

 

Once MSRs are ignored, you might see a lot of MSR warning messages in your dmesg system log. To avoid that, you can ignore MSRs and also disable logging of MSR warning messages by adding the following line instead:

options kvm ignore_msrs=1 report_ignored_msrs=0

 

Once you’re done, press <Ctrl> + X followed by Y and <Enter> to save the /etc/modprobe.d/kvm.conf file and update the initramfs of your Proxmox VE server for the changes to take effect.
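Once the server has been rebooted with the updated initramfs, you can confirm that the setting took effect by reading the kvm module parameter from sysfs; it should print Y:

$ cat /sys/module/kvm/parameters/ignore_msrs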

 

How to Solve HDMI Audio Crackling/Broken Problems on Proxmox VE Linux Virtual Machines?

If you’ve passed the GPU to a Linux Proxmox VE virtual machine and you’re getting bad audio quality on the virtual machine, you will need to enable MSI (Message Signaled Interrupts) for the audio device on the Proxmox VE virtual machine.

To enable MSI on the Linux Proxmox VE virtual machine, open the /etc/modprobe.d/snd-hda-intel.conf file with the nano text editor on the virtual machine with the following command:

$ sudo nano /etc/modprobe.d/snd-hda-intel.conf

 

Add the following line and save the file by pressing <Ctrl> + X followed by Y and <Enter>.

options snd-hda-intel enable_msi=1

 

For the changes to take effect, reboot the Linux virtual machine with the following command:

$ sudo reboot

 

Once the virtual machine boots, check if MSI is enabled for the audio device with the following command:

$ sudo lspci -vv

 

If MSI is enabled for the audio device on the virtual machine, you should see MSI: Enable+ in the audio device’s capabilities in the output.
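For example, you can narrow the output down to the audio device and filter for the MSI capability; this assumes the audio device sits at PCI address 00:1b.0 inside the virtual machine (yours will differ):

$ sudo lspci -vv -s 00:1b.0 | grep MSI

A line containing MSI: Enable+ means MSI is enabled, while Enable- means it is still disabled.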

 

How to Update Proxmox VE initramfs?

Every time you make any changes to files in the /etc/modules-load.d/ and /etc/modprobe.d/ directories, you must update the initramfs of your Proxmox VE 8 installation with the following command:

$ update-initramfs -u -k all

 

Once Proxmox VE initramfs is updated, reboot your Proxmox VE server for the changes to take effect.

$ reboot

 

How to Update Proxmox VE GRUB Bootloader?

Every time you update the Proxmox VE GRUB boot configuration file /etc/default/grub, you must update the GRUB bootloader for the changes to take effect.

To update the Proxmox VE GRUB bootloader with the new configurations, run the following command:

$ update-grub2

 

Once the GRUB bootloader is updated with the new configuration, reboot your Proxmox VE server for the changes to take effect.

$ reboot

 

Conclusion

In this article, we have discussed some of the most common Proxmox VE PCI/PCIE passthrough and GPU passthrough problems and the steps you can take to solve them.

 

References

  1. [TUTORIAL] – PCI/GPU Passthrough on Proxmox VE 8: Installation and configuration | Proxmox Support Forum
  2. Ultimate Beginner’s Guide to Proxmox GPU Passthrough
  3. Reading and Writing Model Specific Registers in Linux
  4. The MSI Driver Guide HOWTO — The Linux Kernel documentation

 

 
