
Entries in this blog

by: Abhishek Prakash
Wed, 29 Jan 2025 20:04:25 +0530


What's in a name? Sometimes the name can be deceptive.

For example, in the Linux Tips and Tutorials section of this newsletter, I am sharing a few commands that have nothing to do with what their name indicates 😄

Here are the other highlights of this edition of LHB Linux Digest:

  • Nice and renice commands
  • ReplicaSet in Kubernetes
  • Self hosted code snippet manager
  • And more tools, tips and memes for you
  • This edition of LHB Linux Digest newsletter is supported by RELIANOID.

❇️Comprehensive Load Balancing Solutions For Modern Networks

RELIANOID’s load balancing solutions combine the power of SD-WAN, secure application delivery, and elastic load balancing to optimize traffic distribution and ensure unparalleled performance.

With features like a robust Web Application Firewall (WAF) and built-in DDoS protection, your applications remain secure and resilient against cyber threats. High availability ensures uninterrupted access, while open networking and user experience networking enhance flexibility and deliver a seamless experience across all environments, from on-premises to cloud.

Free Load Balancer Download | Community Edition by RELIANOID
Discover our Free Load Balancer | Community Edition | The best Open Source Load Balancing software for providing high availability and content switching services

📖 Linux Tips and Tutorials

Using nice and renice commands to change process priority.

Change Process Priority With nice and renice Commands
You can modify whether a certain process should get priority in consuming CPU with the nice and renice commands.
by: LHB Community
Wed, 29 Jan 2025 18:26:26 +0530


Kubernetes is a powerful container orchestration platform that enables developers to manage and deploy containerized applications with ease. One of its key components is the ReplicaSet, which plays a critical role in ensuring high availability and scalability of applications.

In this guide, we will explore the ReplicaSet, its purpose, and how to create and manage it effectively in your Kubernetes environment.

What is a ReplicaSet in Kubernetes?

A ReplicaSet in Kubernetes is a higher-level abstraction that ensures a specified number of pod replicas are running at all times. If a pod crashes or becomes unresponsive, the ReplicaSet automatically creates a new pod to maintain the desired state. This guarantees high availability and resilience for your applications.

The key purposes of a ReplicaSet include:

  • Scaling Pods: ReplicaSets manage the replication of pods, ensuring the desired number of replicas are always running.
  • High Availability: Ensures that your application remains available even if one or more pods fail.
  • Self-Healing: Automatically replaces failed pods to maintain the desired state.
  • Efficient Workload Management: Helps distribute workloads across nodes in the cluster.

How Does a ReplicaSet Work?

A ReplicaSet relies on selectors to match pods using labels. It uses these selectors to monitor the pods and ensure the actual number of pods matches the specified replica count. If the number is less than the desired count, new pods are created; if it’s greater, excess pods are terminated.

Creating a ReplicaSet

To create a ReplicaSet, you define its configuration in a YAML file. Here’s an example:

Example YAML Configuration

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx-replicaset
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.21
        ports:
        - containerPort: 80

In this YAML file:

  • replicas: Specifies the desired number of pod replicas.
  • selector: Matches pods with the label app=nginx.
  • template: Defines the pod’s specifications, including the container image and port.

Deploying a ReplicaSet

Once you have the YAML file ready, follow these steps to deploy it in your Kubernetes cluster.

Apply the YAML configuration to create the ReplicaSet:

kubectl apply -f nginx-replicaset.yaml

Verify that the ReplicaSet was created and the pods are running:

kubectl get replicaset

Output:

NAME                DESIRED   CURRENT   READY   AGE
nginx-replicaset    3         3         3       5s

View the pods to check the pods created by the ReplicaSet:

kubectl get pods

Output:

NAME                      READY   STATUS    RESTARTS   AGE
nginx-replicaset-xyz12    1/1     Running   0          10s
nginx-replicaset-abc34    1/1     Running   0          10s
nginx-replicaset-lmn56    1/1     Running   0          10s

Scaling a ReplicaSet

You can easily scale the number of replicas in a ReplicaSet. For example, to scale the above ReplicaSet to 5 replicas:

kubectl scale replicaset nginx-replicaset --replicas=5

Verify the updated state:

kubectl get replicaset

Output:

NAME                DESIRED   CURRENT   READY   AGE
nginx-replicaset    5         5         5       2m
Learn Kubernetes Operator
Learn to build, test and deploy Kubernetes Operators using Kubebuilder as well as Operator SDK in this course.

Conclusion

A ReplicaSet is an essential component of Kubernetes, ensuring the desired number of pod replicas are running at all times. By leveraging ReplicaSets, you can achieve high availability, scalability, and self-healing for your applications with ease.

Whether you’re managing a small application or a large-scale deployment, understanding ReplicaSets is crucial for effective workload management.

✍️
Author: Hitesh Jethwa has more than 15 years of experience with Linux system administration and DevOps. He likes to explain complicated topics in an easy-to-understand way.
by: Satoshi Nakamoto
Wed, 29 Jan 2025 16:53:22 +0530


A few years ago, we witnessed a shift to containers, and today containers have become an integral part of the IT infrastructure for most companies.

Traditional monitoring tools often fall short in providing the visibility needed to ensure performance, security, and reliability.

In my experience, monitoring resource allocation is the most important part of deploying containers, which is why I have rounded up the top container monitoring solutions offering real-time insights into your containerized environments.

Top Container Monitoring Solutions

Before I jump into details, here's a brief overview of all the tools I'll be discussing in a moment:

| Tool | Pricing & Plans | Free Tier? | Key Free Tier Features | Key Paid Plan Features |
|---|---|---|---|---|
| Middleware | Free up to 100GB; pay-as-you-go at $0.3/GB; custom enterprise plans | Yes | Up to 100GB data, 1k RUM sessions, 20k synthetic checks, 14-day retention | Unlimited data volume; data pipeline & ingestion control; single sign-on; dedicated support |
| Datadog | Free plan (limited hosts & 1-day metric retention); Pro starts at $15/host/month; Enterprise from $23 | Yes | Basic infrastructure monitoring for up to 5 hosts; limited metric retention | Extended retention, advanced anomaly detection, over 750 integrations, multi-cloud support |
| Prometheus & Grafana | Open-source; no licensing costs | Yes | Full-featured metrics collection (Prometheus), custom dashboards (Grafana) | Self-managed support only; optional managed services through third-party providers |
| Dynatrace | 15-day free trial; usage-based: $0.04/hour for infrastructure-only, $0.08/hour for full-stack | Trial only | N/A (trial only) | AI-driven root cause analysis, automatic topology discovery, enterprise support, multi-cloud observability |
| Sematext | Free plan (Basic) with limited container monitoring; paid plans start at $0.007/container/hour | Yes | Live metrics for a small number of containers, 30-minute retention, limited alert rules | Increased container limits, extended retention, unlimited alert rules, full-stack monitoring |
| Sysdig | Free tier; Sysdig Monitor starts at $20/host/month; Sysdig Secure is $60/host/month | Yes | Basic container monitoring, limited metrics and retention | Advanced threat detection, vulnerability management, compliance checks, Prometheus support |
| SolarWinds | No permanent free plan; pricing varies by module (starts around $27.50/month or $2995 single license) | Trial only | N/A (trial only) | Pre-built Docker templates, application-centric mapping, hardware health, synthetic monitoring |
| Splunk | Observability Cloud starts at $15/host/month (annual billing); free trial available | Trial only | N/A (trial only) | Real-time log and metrics analysis, AI-based anomaly detection, multi-cloud integrations, advanced alerting |
| MetricFire | Paid plans start at $19/month; free trial offered | Trial only | N/A (trial only) | Integration with Graphite and Prometheus, customizable dashboards, real-time alerts |
| SigNoz | Open-source (self-hosted) or custom paid support | Yes | Full observability stack (metrics, traces, logs) with no licensing costs | Commercial support, managed hosting services, extended retention options |

Here, "N/A (trial only)" means the tool does not offer a permanent free tier, only a limited-time trial to test its features. Once the trial ends, you must subscribe to a paid plan to continue using the tool.

1. Middleware

Middleware

Middleware is an excellent choice for teams looking for a free or scalable container monitoring solution. It provides pre-configured dashboards for Kubernetes environments and real-time visibility into container health.

With a free tier supporting up to 100GB of data and a pay-as-you-go model at $0.3/GB thereafter, it’s ideal for startups or small teams.

Key features:

  • Pre-configured dashboards for Kubernetes
  • Real-time metrics tracking
  • Alerts for critical events
  • Correlation of metrics with logs and traces

Pros:

  • Free tier available
  • Easy setup with minimal configuration
  • Scalable pricing model

Cons:

  • Limited advanced features compared to premium tools

2. Datadog

Datadog

Datadog is a premium solution offering observability across infrastructure, applications, and logs. Its auto-discovery feature makes it particularly suited for dynamic containerized environments.

The free plan supports up to five hosts with limited retention. Paid plans start at $15 per host per month.

Key features:

  • Real-time performance tracking
  • Anomaly detection using ML
  • Auto-discovery of new containers
  • Distributed tracing and APM

Pros:

  • Extensive integrations (750+)
  • User-friendly interface
  • Advanced visualization tools

Cons:

  • High cost for small teams
  • Pricing can vary based on usage spikes

3. Prometheus & Grafana

Prometheus & Grafana

This open-source duo provides powerful monitoring and visualization capabilities. Prometheus has an edge in metrics collection with its PromQL query language, while Grafana offers stunning visualizations.

This eventually makes it perfect for teams seeking customization without licensing costs.
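As a tiny illustration of that customization, here is a minimal Prometheus scrape job for container metrics, a sketch of my own rather than something from this article; the job name and the cAdvisor target address are assumptions you would adapt to your setup:

```yaml
# prometheus.yml fragment: scrape container metrics exposed by cAdvisor.
# "cadvisor:8080" assumes a cAdvisor container reachable under that name.
scrape_configs:
  - job_name: cadvisor
    scrape_interval: 15s
    static_configs:
      - targets: ["cadvisor:8080"]
```

Once scraped, those metrics can be queried in Grafana with PromQL and turned into dashboards.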

Key features:

  • Time-series data collection
  • Flexible query language (PromQL)
  • Customizable dashboards
  • Integrated alerting system

Pros:

  • Free to use
  • Highly customizable
  • Strong community support

Cons:

  • Requires significant setup effort
  • Limited out-of-the-box functionality

4. Dynatrace

Dynatrace

Dynatrace is an AI-powered observability platform designed for large-scale hybrid environments. It automates topology discovery and offers deep insights into containerized workloads. Pricing starts at $0.04/hour for infrastructure-only monitoring.

Key features:

  • AI-powered root cause analysis
  • Automatic topology mapping
  • Real-user monitoring
  • Cloud-native support (Kubernetes/OpenShift)

Pros:

  • Automated configuration
  • Scalability for large environments
  • End-to-end visibility

Cons:

  • Expensive for smaller teams
  • Proprietary platform limits flexibility

5. Sematext

Sematext

Sematext is a lightweight tool that allows users to monitor metrics and logs across Docker and Kubernetes platforms. Its free plan supports basic container monitoring with limited retention and alerting rules. Paid plans start at just $0.007/container/hour.

Key features:

  • Unified dashboard for logs and metrics
  • Real-time insights into containers and hosts
  • Auto-discovery of new containers
  • Anomaly detection and alerting

Pros:

  • Affordable pricing plans
  • Simple setup process
  • Full-stack observability features

Cons:

  • Limited advanced features compared to premium tools

7. SolarWinds

SolarWinds

SolarWinds offers an intuitive solution for SMBs needing straightforward container monitoring. While it doesn’t offer a permanent free plan, its pricing starts at around $27.50/month or $2995 as a one-time license fee.

Key features:

  • Pre-built Docker templates
  • Application-centric performance tracking
  • Hardware health monitoring
  • Dependency mapping

Pros:

  • Easy deployment and setup
  • Out-of-the-box templates
  • Suitable for smaller teams

Cons:

  • Limited flexibility compared to open-source tools

8. Splunk

Splunk

Splunk provides not only log analysis but also strong container monitoring capabilities through its Observability Cloud suite. Pricing starts at $15/host/month on annual billing.

Key features:

  • Real-time log and metrics analysis
  • AI-based anomaly detection
  • Customizable dashboards and alerts
  • Integration with OpenTelemetry standards

Pros:

  • Powerful search capabilities
  • Scalable architecture
  • Extensive integrations

Cons:

  • High licensing costs for large-scale deployments

9. MetricFire

MetricFire

MetricFire simplifies container monitoring by offering customizable dashboards and seamless integration with Kubernetes and Docker. It is ideal for teams looking for a reliable hosted solution without the hassle of managing infrastructure. Pricing starts at $19/month.

Key features:

  • Hosted Graphite and Grafana dashboards
  • Real-time performance metrics
  • Integration with Kubernetes and Docker
  • Customizable alerting systems

Pros:

  • Easy setup and configuration
  • Scales effortlessly as metrics grow
  • Transparent pricing model
  • Strong community support

Cons:

  • Limited advanced features compared to proprietary tools
  • Requires technical expertise for full customization

10. SigNoz

SigNoz

SigNoz is an open-source alternative to proprietary APM tools like Datadog and New Relic. It offers a platform for logs, metrics, and traces under one interface.

With native OpenTelemetry support and a focus on distributed tracing for microservices architectures, SigNoz is perfect for organizations seeking cost-effective yet powerful observability solutions.

Key features:

  • Distributed tracing for microservices
  • Real-time metrics collection
  • Centralized log management
  • Customizable dashboards
  • Native OpenTelemetry support

Pros:

  • Completely free if self-hosted
  • Active development community
  • Cost-effective managed cloud option
  • Comprehensive observability stack

Cons:

  • Requires infrastructure setup if self-hosted
  • Limited enterprise-level support compared to proprietary tools

Evaluate your infrastructure complexity and budget to select the best tool that aligns with your goals!

by: Abhishek Kumar
Thu, 23 Jan 2025 11:22:15 +0530


Imagine this: You’ve deployed a handful of Docker containers to power your favorite applications, maybe a self-hosted Nextcloud for your files, a Pi-hole for ad-blocking, or even a media server like Jellyfin.

Everything is running like a charm, but then you hit a common snag: keeping those containers updated.

When a new image is released, you’ll need to manually pull it, stop the running container, recreate it with the updated image, and hope everything works as expected.

Multiply that by the number of containers you’re running, and it’s clear how this quickly becomes a tedious and time-consuming chore.

But there’s more at stake than just convenience. Skipping updates or delaying them for too long can lead to outdated software running in your containers, which often means unpatched vulnerabilities.

These can become a serious security risk, especially if you’re hosting services exposed to the internet.

This is where Watchtower steps in, a tool designed to take the hassle out of container updates by automating the entire process.

Whether you’re running a homelab or managing a production environment, Watchtower ensures your containers are always up-to-date and secure, all with minimal effort on your part.

What is Watchtower?

Watchtower is an open-source tool that automatically monitors your Docker containers and updates them whenever a new version of their image is available.

It keeps your setup up-to-date, saving time and reducing the risk of running outdated containers.

But it’s not just a "set it and forget it" solution; it’s also highly customizable, allowing you to tailor its behavior to fit your workflow.

Whether you prefer full automation or staying in control of updates, Watchtower has you covered.

How does it work?

Watchtower works by periodically checking for updates to the images of your running containers. When it detects a newer version, it pulls the updated image, stops the current container, and starts a new one using the updated image.

The best part? It maintains your existing container configuration, including port bindings, volume mounts, and environment variables.

If your containers depend on each other, Watchtower handles the update process in the correct order to avoid downtime.

Deploying watchtower

Since you’re reading this article, I’ll assume you already have some sort of homelab or Docker setup where you want to automate container updates. That means I won’t be covering Docker installation here.

When it comes to deploying Watchtower, it can be done in two ways:

Docker run

If you’re just trying it out or want a straightforward deployment, you can run the following command:

docker run -d --name watchtower -v /var/run/docker.sock:/var/run/docker.sock containrrr/watchtower

This will spin up a Watchtower container that monitors your running containers and updates them automatically.

But here’s the thing, I’m not a fan of the docker run command.

It’s quick, sure, but I prefer the stack approach rather than cramming everything into a single command.

Docker compose

If you fancy using Docker Compose to run Watchtower, here’s a minimal configuration that replicates the docker run command above:

version: "3.8"

services:
  watchtower:
    image: containrrr/watchtower
    container_name: watchtower
    restart: always
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock

To start Watchtower using this configuration, save it as docker-compose.yml and run:

docker-compose up -d

This will give you the same functionality as the docker run command, but in a cleaner, more manageable format.

Customizing watchtower with environment variables

Running Watchtower with its defaults is all good, but we can make it even better with environment variables and command arguments.

Personally, I don’t like giving full autonomy to one service to automatically make changes on my behalf.

Since I have a pretty decent homelab running crucial containers, I prefer using Watchtower to notify me about updates rather than updating everything automatically.

This ensures that I remain in control, especially for containers that are finicky or require a perfect pairing with their databases.

Sneak peek into my homelab

Take a look at my homelab setup: it’s mostly CMS containers for myself and for clients, and some of them can behave unpredictably if not updated carefully.

So instead of letting Watchtower update everything, I configure it to provide insights and alerts, and then I manually decide which updates to apply.

To achieve this, we’ll add the following environment variables to our Docker Compose file:

| Environment Variable | Description |
|---|---|
| WATCHTOWER_CLEANUP | Removes old images after updates, keeping your Docker host clean. |
| WATCHTOWER_POLL_INTERVAL | Sets how often Watchtower checks for updates (in seconds). One hour (3600 seconds) is a good balance. |
| WATCHTOWER_LABEL_ENABLE | Updates only containers with specific labels, giving you granular control. |
| WATCHTOWER_DEBUG | Enables detailed logs, which can be invaluable for troubleshooting. |
| WATCHTOWER_NOTIFICATIONS | Configures the notification method (e.g., email) to keep you informed about updates. |
| WATCHTOWER_NOTIFICATION_EMAIL_FROM | The email address from which notifications will be sent. |
| WATCHTOWER_NOTIFICATION_EMAIL_TO | The recipient email address for update notifications. |
| WATCHTOWER_NOTIFICATION_EMAIL_SERVER | SMTP server address for sending notifications. |
| WATCHTOWER_NOTIFICATION_EMAIL_SERVER_PORT | Port used by the SMTP server (commonly 587 for TLS). |
| WATCHTOWER_NOTIFICATION_EMAIL_USERNAME | SMTP server username for authentication. |
| WATCHTOWER_NOTIFICATION_EMAIL_PASSWORD | SMTP server password for authentication. |
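With WATCHTOWER_LABEL_ENABLE turned on, only containers that opt in via Watchtower's enable label get touched. As a sketch (the service name and image are my own example, not from my homelab), a target container would opt in like this:

```yaml
services:
  myapp:                    # hypothetical service you want Watchtower to manage
    image: nginx:latest
    labels:
      com.centurylinklabs.watchtower.enable: "true"
```

Containers without this label are then left alone, which is exactly the granular control I want for finicky CMS containers.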

Here’s how the updated docker-compose.yml file would look:

version: "3.8"

services:
  watchtower:
    image: containrrr/watchtower
    container_name: watchtower
    restart: always
    environment:
      WATCHTOWER_CLEANUP: "true"
      WATCHTOWER_POLL_INTERVAL: "3600"
      WATCHTOWER_LABEL_ENABLE: "true"
      WATCHTOWER_DEBUG: "true"
      WATCHTOWER_NOTIFICATIONS: "email"
      WATCHTOWER_NOTIFICATION_EMAIL_FROM: "admin@example.com"
      WATCHTOWER_NOTIFICATION_EMAIL_TO: "notify@example.com"
      WATCHTOWER_NOTIFICATION_EMAIL_SERVER: "smtp.example.com"
      WATCHTOWER_NOTIFICATION_EMAIL_SERVER_PORT: "587"
      WATCHTOWER_NOTIFICATION_EMAIL_USERNAME: "your_email_username"
      WATCHTOWER_NOTIFICATION_EMAIL_PASSWORD: "your_email_password"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
I like to put my credentials in a separate environment file.
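For instance, the SMTP credentials hardcoded above could move into a .env file next to the compose file (the values here are placeholders):

```
WATCHTOWER_NOTIFICATION_EMAIL_USERNAME=your_email_username
WATCHTOWER_NOTIFICATION_EMAIL_PASSWORD=your_email_password
```

Then reference it from the watchtower service with an env_file entry (and drop those two keys from the environment section), keeping secrets out of version control.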

Once you run the Watchtower container for the first time, you'll receive an initial email confirming that the service is up and running.

Here's an example of what that email might look like:

After some time, as Watchtower analyzes your setup and scans the running containers, it will notify you if it detects any updates available for your containers.

These notifications are sent in real-time and look something like this:

This feature ensures you're always in the loop about potential updates without having to check manually.

Final thoughts

I’m really impressed by Watchtower and have been using it for a month now.

I recommend playing around with it in an isolated environment first, if possible; that’s what I did before deploying it in my homelab.

The email notification feature is great, but my inbox is now flooded with Watchtower emails, so I might create a rule to manage them better. Overall, no complaints so far! I find it much better than updating each container manually.

Updating Docker Containers With Zero Downtime
A step by step methodology that can be very helpful in your day to day DevOps activities without sacrificing invaluable uptime.

What about you? What do you use to update your containers?

If you’ve tried Watchtower, share your experience, anything I should be mindful of?

Let us know in the comments!


pwd command in Linux

by: Satoshi Nakamoto
Sat, 18 Jan 2025 10:27:48 +0530


The pwd command in Linux, short for Print Working Directory, displays the absolute path of the current directory, helping users navigate the file system efficiently.

It is one of the first commands you use when you start learning Linux. And if you are absolutely new, take advantage of this free course:

Learn the Basic Linux Commands in an Hour [With Videos]
Learn the basics of Linux commands in this crash course.

pwd command syntax

Like other Linux commands, pwd also follows this syntax.

pwd [OPTIONS]

Here, you have [OPTIONS], which are used to modify the default behavior of the pwd command. If you don't use any options with the pwd command, it will show the physical path of the current working directory by default.

Unlike many other Linux commands, pwd does not come with many flags. It has only two important options, plus the standard --help and --version flags:

| Option | Description |
|---|---|
| -L | Displays the logical current working directory, including symbolic links. |
| -P | Displays the physical current working directory, resolving symbolic links. |
| --help | Displays help information about the pwd command. |
| --version | Outputs version information of the pwd command. |

Now, let's take a look at the practical examples of the pwd command.

1. Display the current location

This is what the pwd command is famous for, giving you the name of the directory where you are located or from where you are running the command.

pwd
Display the current working directory

2. Display the logical path including symbolic links

If you want to display the logical path, including symbolic links, all you have to do is execute the pwd command with the -L flag as shown here:

pwd -L

To showcase its usage, I will need to go through multiple steps so stay with me. First, go to the tmp directory using the cd command as shown here:

cd /tmp

Now, let's create a symbolic link which is pointing to the /var/log directory:

ln -s /var/log log_link

Finally, change your directory to log_link and use the pwd command with the -L flag:

pwd -L
Display the logical path including symbolic links

In the above steps, I went to the /tmp directory, created a symbolic link pointing to a specific location (/var/log), and then used the pwd command with -L, which successfully showed me the symbolic link path.

3. Resolve symbolic links

The pwd command can also resolve symbolic links. Meaning, you'll see the destination directory where the soft link points to. For that, use the -P flag:

pwd -P

I am going to use the symbolic link which I had created in the 2nd example. Here's what I did:

  • Navigate to /tmp.
  • Create a symbolic link (log_link) pointing to /var/log.
  • Change into the symbolic link (cd log_link)

Once you perform all the steps, you can check the real path of the symbolic link:

pwd -P
Follow symbolic link using the pwd command
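The whole walkthrough above can be reproduced end to end in one go, assuming a typical Linux layout where /var/log is a real directory:

```shell
cd /tmp
rm -f log_link            # start clean in case the link already exists
ln -s /var/log log_link   # symbolic link pointing to /var/log
cd log_link
pwd -L                    # logical path: /tmp/log_link
pwd -P                    # physical path where the link points: /var/log
```

Note how -L reports the path you navigated through, while -P reports where you actually are on disk.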

4. Use pwd command in shell scripts

To get the current location in a bash shell script, you can store the value of the pwd command in a variable and later on print it as shown here:

current_dir=$(pwd)
echo "You are in $current_dir"

Now, if you execute this shell script in your home directory like I did, you will get similar output to mine:

Use the pwd command in the shell script
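Beyond printing your location, a common scripting pattern combines cd and pwd to find the directory a script lives in, regardless of where it is invoked from. This is a sketch of my own, not from the steps above:

```shell
#!/bin/sh
# Resolve the absolute directory containing this script.
# $0 is the path used to invoke the script; cd + pwd make it absolute.
script_dir=$(cd "$(dirname "$0")" && pwd)
echo "This script lives in $script_dir"
```

Invoke it as sh /some/path/demo.sh and it prints /some/path no matter what your current directory is, which is handy for loading files relative to the script.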

Bonus: Know the previous working directory

This is not exactly the use of the pwd command but it is somewhat related and interesting. There is an environment variable in Linux called OLDPWD which stores the previous working directory path.

This means you can get the previous working directory by printing the value of this environment variable:

echo "$OLDPWD"
know the previous working directory
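The same OLDPWD variable is what powers the shell's built-in cd - shortcut, which jumps back to the previous working directory:

```shell
cd /tmp
cd /etc
echo "$OLDPWD"   # prints /tmp, the previous directory
cd -             # switches back to /tmp (and prints the path)
```

This makes it easy to bounce between two directories without retyping their paths.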

Conclusion

This was a quick tutorial on how you can use the pwd command in Linux where I went through syntax, options, and some practical examples of it.

I hope you will find them helpful. If you have any queries or suggestions, leave us a comment.

by: Abhishek Prakash
Wed, 15 Jan 2025 18:28:50 +0530


This is the first newsletter of the year 2025. I hope expanding your Linux knowledge is one of your New Year's resolutions, too. I am looking to learn and use Ansible in my homelab setup. What's yours?

The focus of Linux Handbook in 2025 will be on self-hosting. You'll see more tutorials and articles on open source software you can self-host on your cloud server or your home lab.

Of course, we'll continue to create new content on Kubernetes, Terraform, Ansible and other DevOps tools.

Here are the other highlights of this edition of LHB Linux Digest:

  • Extraterm terminal
  • File descriptors
  • Self hosting mailing list manager
  • And more tools, tips and memes for you
  • This edition of LHB Linux Digest newsletter is supported by PikaPods.

❇️Self-hosting without hassle

PikaPods allows you to quickly deploy your favorite open source software. All future updates are handled automatically by PikaPods while you enjoy using the software. I use it to self host Umami analytics.

Oh! You get $5 free credit, so try it out and see if you could rely on PikaPods.

PikaPods - Instant Open Source App Hosting
Run the finest Open Source web apps from $1/month, fully managed, no tracking, no ads, full privacy. Self-hosting was never this convenient.

As a tech-enthusiast content creator, I'm always on the lookout for innovative ways to connect with my audience and share my passion for technology and self-sufficiency.

But as my newsletter grew in popularity, I found myself struggling with the financial burden of relying on external services like Mailgun - a problem many creators face when trying to scale their outreach efforts without sacrificing quality.

That's when I discovered Listmonk, a free and open-source mailing list manager that not only promises high performance but also gives me complete control over my data.

In this article, I'll walk you through how I successfully installed and deployed Listmonk locally using Docker, sharing my experiences and lessons learned along the way.

I used Linode's cloud server to test the scenario. You may try either of Linode or DigitalOcean or your own servers.

Customer Referral Landing Page - $100
Cut Your Cloud Bills in Half. Deploy more with Linux virtual machines, global infrastructure, and simple pricing. No surprise bills, no lock-in.

Get started on Linode with a $100, 60-day credit for new users.

DigitalOcean – The developer cloud
Helping millions of developers easily build, test, manage, and scale applications of any size – faster than ever before.

Get started on DigitalOcean with a $100, 60-day credit for new users.

Prerequisites

Before diving into the setup process, make sure you have the following:

  • Docker and Docker Compose installed on your server.
  • A custom domain that you want to use for Listmonk.
  • Basic knowledge of shell commands and editing configuration files.

If you are absolutely new to Docker, we have a course just for you:

Learn Docker: Complete Beginner’s Course
Learn Docker, an important skill to have for any DevOps and modern sysadmin. Learn all the essentials of Docker in this series.

Step 1: Set up the project directory

The first thing you need to do is create the directory where you'll store all the necessary files for Listmonk. I like an organized setup (it helps in troubleshooting).

In your terminal, run:

mkdir listmonk
cd listmonk
creating listmonk directory

This will set up a dedicated directory for Listmonk’s files.

Step 2: Create the Docker compose file

Listmonk has made it incredibly easy to get started with Docker. Their official documentation provides a detailed guide and even a sample docker-compose.yml file to help you get up and running quickly.

Download the sample file to the current directory:

curl -LO https://github.com/knadh/listmonk/raw/master/docker-compose.yml
downloading sample docker-compose.yml file from listmonk

Here is the sample docker-compose.yml file; I tweaked some of the default environment variables:

💡
It's crucial to keep your credentials safe! Store them in a separate .env file, not hardcoded in your docker-compose.yml. I know, I know, I did it for this tutorial... but you're smarter than that, right? 😉
editing the environment variables in sample docker-compose.yml

For most users, this setup should be sufficient but you can always tweak settings to your own needs.
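A minimal sketch of the .env approach mentioned in the tip above (the POSTGRES_PASSWORD name is an assumption; match it to whatever variable your docker-compose.yml actually reads):

```shell
# Create a .env file next to docker-compose.yml; Docker Compose loads it automatically.
# POSTGRES_PASSWORD is a placeholder name here - check your own compose file.
cat > .env <<'EOF'
POSTGRES_PASSWORD=a-strong-password-here
EOF

# Keep the file readable by you only
chmod 600 .env
```

In docker-compose.yml you can then write `${POSTGRES_PASSWORD}` instead of the literal value, and Docker Compose substitutes it from .env at startup.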

Then run the containers in the background:

docker compose up -d
running listmonk containers

Once you've run these commands, you can access Listmonk by navigating to http://localhost:9000 in your browser.

Setting up SSL

By default, Listmonk runs over HTTP and doesn’t include built-in SSL support. Encryption is pretty much essential for any public-facing service these days, so the next thing we need to do is set up SSL.

While I personally prefer using Cloudflare Tunnels for SSL and remote access, this tutorial will focus on Caddy for its straightforward integration with Docker.

Start by creating a folder named caddy in the same directory as your docker-compose.yml file:

mkdir caddy

Inside the caddy folder, create a file named Caddyfile with the following content:

listmonk.example.com {
    reverse_proxy app:9000
}

Replace listmonk.example.com with your actual domain name. This tells Caddy to proxy requests from your domain to the Listmonk service running on port 9000.

creating caddyfile

Ensure your domain is correctly configured in DNS. Add an A record pointing to your server's IP address (in my case, the Linode server's IP).

If you’re using Cloudflare, set the proxy status to DNS only during the initial setup to let Caddy handle SSL certificates.

creating a dns record for listmonk

Next, add the Caddy service to your docker-compose.yml file. Here’s the configuration to include:

  caddy:
    image: caddy:latest
    restart: unless-stopped
    container_name: caddy
    ports:
      - 80:80
      - 443:443
    volumes:
      - ./caddy/Caddyfile:/etc/caddy/Caddyfile
      - ./caddy/caddy_data:/data
      - ./caddy/caddy_config:/config
    networks:
      - listmonk
adding caddy service in docker-compose file

This configuration sets up Caddy to handle HTTP (port 80) and HTTPS (port 443) traffic, automatically obtain SSL certificates, and reverse proxy requests to the Listmonk container.

Finally, recreate the containers so that the new Caddy service is created and started (a plain restart would not create the newly added service):

docker compose up -d

Once the containers are up and running, navigate to your domain (e.g., https://listmonk.example.com) in a browser.

Caddy will handle the SSL certificate issuance and proxy the traffic to Listmonk seamlessly.

Step 3: Accessing Listmonk webUI

Once Listmonk is up and running, it’s time to access the web interface and complete the initial setup.

Open your browser and navigate to your domain or IP address where Listmonk is hosted. If you’ve configured HTTPS, the URL should look something like this:

https://listmonk.yourdomain.com

and you’ll be greeted with the login page. Click Login to proceed.

Creating the admin user

On the login screen, you’ll be prompted to create an administrator account. Enter your email address, a username, and a secure password, then click Continue.

creating admin account for listmonk

This account will serve as the primary admin for managing Listmonk.

Configure general settings

Once logged in, navigate to Settings > Settings in the left sidebar. Under the General tab, customize the following:

  • Site Name: Enter a name for your Listmonk instance.
  • Root URL: Replace the default http://localhost:9000 with your domain (e.g., https://listmonk.yourdomain.com).
  • Admin Email: Add an email address for administrative notifications.

Click Save to apply these changes.

editing general settings

Configure SMTP settings

To send emails, you’ll need to configure SMTP settings:

  1. Click on the SMTP tab in the settings.
  2. Fill in the details:
    • Host: smtp.emailhost.com
    • Port: 465
    • Auth Protocol: Login
    • Username: Your email address
    • Password: Your email password (or Gmail App password, generated via Google’s security settings)
    • TLS: SSL/TLS
  3. Click Save to confirm the settings.
adding smtp settings to send emails

Create a new campaign list

Now, let’s create a list to manage your subscribers:

  1. Go to All Lists in the left sidebar and click + New.
  2. Give your list a name, set it to Public, and choose between Single Opt-In or Double Opt-In.
  3. Add a description, then click Save.
creating a test newsletter

Your newsletter subscription form will now be available at:

https://listmonk.yourdomain.com/subscription/form

newsletter subscribe page

With everything set up and running smoothly, it’s time to put Listmonk to work.

You can easily import your existing subscribers, customize the look and feel of your emails, and even change the logo to match your brand.

Final thoughts

And that’s it! You’ve successfully set up Listmonk, configured SMTP, and created your first campaign list. From here, you can start sending newsletters and growing your audience.

I’m currently testing Listmonk for my own newsletter solution on my website, and while it’s a robust solution, I’m curious to see how it performs in a production environment.

That said, I’m genuinely impressed by the thought and effort that Kailash Nadh and the contributors have put into this software, it’s a remarkable achievement.

For any questions or challenges you encounter, the Listmonk GitHub page is an excellent resource and the developers are highly responsive.

Finally, I’d love to hear your thoughts! Share your feedback, comments, or suggestions below. I’d love to hear about your experience with Listmonk and how you’re using it for your projects.

Happy emailing! 📨


File descriptors are a core concept in Linux and other Unix-like operating systems. They provide a way for programs to interact with files, devices, and other input/output (I/O) resources.

Simply put, a file descriptor is like a "ticket" or "handle" that a program uses to access these resources. Every time a program opens a file or creates an I/O resource (like a socket or pipe), the operating system assigns it a unique number called a file descriptor.

This number allows the program to read, write, or perform other operations on the resource.

And as we all know, in Linux, almost everything is treated as a file—whether it's a text file, a keyboard input, or even network communication. File descriptors make it possible to handle all these resources in a consistent and efficient way.

What Are File Descriptors?

A file descriptor is a non-negative integer assigned by your operating system whenever a program opens a file or another I/O resource. It acts as an identifier that the program uses to interact with the resource.

For example:

  • When you open a text file, the operating system assigns it a file descriptor (e.g., 3).
  • If you open another file, it gets the next available file descriptor (e.g., 4).

These numbers are used internally by the program to perform operations like reading from or writing to the resource.

This simple mechanism allows programs to interact with different resources without needing to worry about how these resources are implemented underneath.

For instance, whether you're reading from a keyboard or writing to a network socket, you use file descriptors in the same way!
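You can even work with descriptors beyond the standard three directly from Bash. A minimal sketch using exec to open, use, and close descriptor 3 (the file path is arbitrary):

```shell
# Open /tmp/fd_demo.txt for writing on file descriptor 3
exec 3> /tmp/fd_demo.txt

# Write through the descriptor, just like writing to fd 1 (stdout)
echo "hello via fd 3" >&3

# Close descriptor 3 when done
exec 3>&-

cat /tmp/fd_demo.txt
```

The program (here, the shell) never cares what is behind the number 3; it just writes to the descriptor.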

The three standard file descriptors

Every process in Linux starts with three predefined file descriptors: Standard Input (stdin), Standard Output (stdout), and Standard Error (stderr).

Here's a brief summary of their use:

Descriptor | Integer Value | Symbolic Constant | Purpose
---------- | ------------- | ----------------- | -------
stdin      | 0             | STDIN_FILENO      | Standard input (keyboard input by default)
stdout     | 1             | STDOUT_FILENO     | Standard output (screen output by default)
stderr     | 2             | STDERR_FILENO     | Standard error (error messages by default)
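These descriptors are not abstract: Linux exposes every process's open descriptors as symlinks under /proc. You can list your current shell's descriptors like this:

```shell
# $$ is the current shell's PID; entries 0, 1 and 2 are stdin, stdout and stderr
ls -l /proc/$$/fd
```

Each entry is a symlink pointing at the terminal device, file, or pipe the descriptor is currently attached to.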

Now, let's address each file descriptor with details.

1. Standard Input (stdin)- Descriptor: 0

The purpose of the standard input stream is to receive input data. By default, it reads input from the keyboard unless redirected to another source like a file or pipe. Programs use stdin to accept user input interactively or process data from external sources.

When you type something into the terminal and press Enter, the data is sent to the program's stdin. This stream can also be redirected to read from files or other programs using shell redirection operators (<).

One simple example of stdin would be a script that takes input from the user and prints it:

#!/bin/bash

# Prompt the user to enter their name
echo -n "Enter your name: "

# Read the input from the user
read name

# Print a greeting message
echo "Hello, $name!"

Here's what the output looks like:

But there is another way of using the input stream–redirecting the input itself. You can create a text file and redirect the input stream.

For example, here I have created a sample text file named input.txt which contains my name Satoshi. Later I redirected the input stream using <:

As you can see, rather than waiting for my input, it took data from the text file and we somewhat automated this.
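If you want to try this yourself, here is the full sequence; greet.sh is the script shown earlier:

```shell
# Recreate the script from above
cat > greet.sh <<'EOF'
#!/bin/bash
echo -n "Enter your name: "
read name
echo "Hello, $name!"
EOF
chmod +x greet.sh

# Feed stdin from a file instead of the keyboard
echo "Satoshi" > input.txt
./greet.sh < input.txt
```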

2. Standard Output (stdout)- Descriptor: 1

The standard output stream is used for displaying normal output generated by programs. By default, it writes output to the terminal screen unless redirected elsewhere.

In simple terms, programs use stdout to print results or messages. This stream can be redirected to write output to files or other programs using shell operators (> or |).

Let's take a simple script that prints a greeting message:

#!/bin/bash

# Print a message to standard output
echo "This is standard output."

Here's the simple output (nothing crazy but a decent example):

stdout sample script

Now, if I want to redirect the output to a file, rather than showing it on the terminal screen, then I can use > as shown here:

./stdout.sh > output.txt
change output datastream

Another good example can be the redirecting output of a command to a text file:

ls > output.txt
Redirect output of command to text file
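A related detail worth knowing: > truncates the target file on every run, while >> appends to it. For example (demo.txt is just a scratch file):

```shell
# '>' overwrites the file, '>>' appends to it
echo "first line"  > demo.txt
echo "second line" >> demo.txt
cat demo.txt
```

Running the first command again would wipe demo.txt back down to a single line.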

3. Standard Error (stderr)- Descriptor: 2

The standard error stream is used for displaying error messages and diagnostics. It is separate from stdout so that errors can be handled independently of normal program output.

For a better understanding, I wrote a script that prints a message to stderr and uses exit 1 to mimic a failed execution:

#!/bin/bash

# Print a message to standard output
echo "This is standard output."

# Print an error message to standard error
echo "This is an error message." >&2

# Exit with a non-zero status to indicate an error
exit 1

If you execute this script normally, both messages land on the terminal and look identical. To see the difference, you can redirect the output and the error to different files.

For example, here, I have redirected the error message to stderr.log and the normal output will go into stdout.log:

./stderr.sh > stdout.log 2> stderr.log
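You can also merge both streams into a single file with 2>&1, or throw errors away via /dev/null. A self-contained sketch (stderr.sh is the script from above):

```shell
# Recreate the script from above
cat > stderr.sh <<'EOF'
#!/bin/bash
echo "This is standard output."
echo "This is an error message." >&2
exit 1
EOF
chmod +x stderr.sh

# Send stdout and stderr to the same file
# ('|| true' only because the script deliberately exits 1)
./stderr.sh > all.log 2>&1 || true

# Keep stdout on the terminal, discard errors
./stderr.sh 2> /dev/null || true
```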

Bonus: Types of limits on file descriptors

Linux kernel puts a limit on the number of file descriptors a process can use. These limits help manage system resources and prevent any single process from using too many. There are different types of limits, each serving a specific purpose.

  • Soft Limits: The default maximum number of file descriptors a process can open. Users can temporarily increase this limit up to the hard limit for their session.
  • Hard Limits: The absolute maximum number of file descriptors a process can open. Only the system admin can increase this limit to ensure system stability.
  • Process-Level Limits: Each process has its own set of file descriptor limits, inherited from its parent process, to prevent any single process from overusing resources.
  • System-Level Limits: The total number of file descriptors available across all processes on the system. This ensures fairness and prevents global resource exhaustion.
  • User-Level Limits: Custom limits set for specific users or groups to allocate resources differently based on their needs.

Wrapping Up...

In this explainer, I went through what file descriptors are in Linux and shared some practical examples to explain their function. I tried to cover the types of limits in detail but then I had to drop the "detail" to stick to the main idea of this article.

But if you want, I can surely write a detailed article on the types of limits on file descriptors. Also, if you have any questions or suggestions, leave us a comment.

I don’t like my prompt and I want to change it. It has my username and host, but the formatting is not what I want. This blog will get you started quickly on doing exactly that.

This is my current prompt below:

To change the prompt you will update .bashrc and set the PS1 environment variable to a new value.

Here is a cheatsheet of the prompt options:

You can use these placeholders for customization:

\u – Username
\h – Hostname
\w – Current working directory
\W – Basename of the current working directory
\$ – Shows $ for a normal user and # for the root user
\t – Current time (HH:MM:SS)
\d – Date (e.g., "Mon Jan 05")
\! – History number of the command
\# – Command number

I want to change my prompt. Here is the new prompt I am going to use:

export PS1="linuxhint@mybox \w: "

Can you guess what that does? For my article writing, this is exactly what I want. Here is the screenshot:

A lot of people will want the username and hostname; for my example I don’t! But you can use \u and \h for that. I used \w to show which directory I am in. You can also show the date, time, and more.
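By the way, export PS1=... only changes the prompt for the current session. To make it permanent, append the line to ~/.bashrc (using my example prompt string here):

```shell
# Append the prompt definition to ~/.bashrc so every new shell picks it up
echo 'export PS1="linuxhint@mybox \w: "' >> ~/.bashrc

# Reload the file in the current session
. ~/.bashrc
```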

You can also play with setting colors in the prompt with these variables:

Foreground Colors:
\e[30m – Black
\e[31m – Red
\e[32m – Green
\e[33m – Yellow
\e[34m – Blue
\e[35m – Magenta
\e[36m – Cyan
\e[37m – White

Background Colors:
\e[40m – Black
\e[41m – Red
\e[42m – Green
\e[43m – Yellow
\e[44m – Blue
\e[45m – Magenta
\e[46m – Cyan
\e[47m – White
Reset Color:
\e[0m – Reset to default
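Before wiring colors into PS1, you can preview any of these codes in the terminal. printf interprets the escapes portably (\033 is the same ESC character as \e):

```shell
# Print colored text, resetting with \033[0m after each segment
printf '\033[32mThis is green\033[0m and \033[41mred background\033[0m\n'
```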

Here is my colorful version. The \[ and \] around each escape sequence tell Bash that the enclosed characters are non-printing, which keeps line wrapping and cursor positioning working correctly.

export PS1="\[\e[35m\]linuxhint\[\e[0m\]@\[\e[34m\]mybox\[\e[0m\] \[\e[31m\]\w\[\e[0m\]: "


This uses Magenta, Blue and Red coloring for different parts of the prompt.

Conclusion

You have seen how to customize your Bash prompt with the PS1 environment variable in Ubuntu. I hope this helps you be happy with your environment in Linux.


Associative arrays were introduced in Bash version 4, and they solved my biggest problem with arrays in Bash: indexing. Associative arrays allow you to create key-value pairs, offering a more flexible way to handle data compared to indexed arrays.

In simple terms, you can store and retrieve data using string keys, rather than numeric indices as in traditional indexed arrays.

But before we begin, make sure you are running the bash version 4 or above by checking the bash version:

echo $BASH_VERSION
check the bash version

If you are running bash version 4 or above, you can access the associative array feature.

Using Associative arrays in bash

Before I walk you through the examples of using associative arrays, I would like to mention the key differences between Associative and indexed arrays:

Feature            | Indexed Arrays             | Associative Arrays
------------------ | -------------------------- | -------------------------------
Index Type         | Numeric (e.g., 0, 1, 2)    | String (e.g., "name", "email")
Declaration Syntax | declare -a array_name      | declare -A array_name
Access Syntax      | ${array_name[index]}       | ${array_name["key"]}
Use Case           | Sequential or numeric data | Key-value pair data

Now, let's take a look at what you are going to learn in this tutorial on using Associative arrays:

  • Declaring an Associative array
  • Assigning values to an array
  • Accessing values of an array
  • Iterating over an array's elements

1. How to declare an Associative array in bash

To declare an associative array in bash, all you have to do is use the declare command with the -A flag along with the name of the array as shown here:

declare -A Array_name

For example, if I want to declare an associative array named LHB, then I would use the following command:

declare -A LHB
declare associative array in bash

2. How to add elements to an Associative array

There are two ways you can add elements to an Associative array: You can either add elements after declaring an array or you can add elements while declaring an array. I will show you both.

Adding elements after declaring an array

This is quite easy and recommended if you are getting started with bash scripting. In this method, you add elements to the already declared array one by one.

To do so, you have to use the following syntax:

my_array[key1]="value1"

In my case, I have assigned two values using two key pairs to the LHB array:

LHB[name]="Satoshi"
LHB[age]="25"
Assign values to the associative array

Adding elements while declaring an array

If you want to add elements while declaring the associative array itself, you can follow the given command syntax:

declare -A my_array=(
    [key1]="value1"
    [key2]="value2"
    [key3]="value3"
)

For example, here, I created a new associative array and added three elements:

declare -A myarray=(
    [Name]="Satoshi"
    [Age]="25"
    [email]="satoshi@xyz.com"
)
Assign values to the associative array while creating array

3. Create a read-only Associative array

If you want to create a read-only array (for some reason), you'd have to use the -r flag while creating an array:

declare -rA my_array=(
    [key1]="value1"
    [key2]="value2"
    [key3]="value3"
)

Here, I created a read-only Associative array named MYarray:

declare -rA MYarray=(
    [City]="Tokyo"
    [System]="Ubuntu"
    [email]="satoshi@xyz.com"
)

Now, if I try to add a new element to this array, it will throw an error saying "MYarray: read-only variable":

Can not add additional elements to read-only associative array

4. Print keys and values of an Associative array

If you want to print the value of a specific key (similar to printing the value of a specific indexed element), you can simply use the following syntax for that purpose:

echo ${my_array[key1]}

For example, if I want to print the value of email key from the myarray array, I would use the following:

echo ${myarray[email]}
Print value of a key in associative array

The method of printing all the keys and elements of an Associative array is mostly the same. To print all keys at once, use ${!my_array[@]} which will retrieve all the keys in the associative array:

echo "Keys: ${!my_array[@]}"

If I want to print all the keys of myarray, then I would use the following:

echo "Keys: ${!myarray[@]}"
Print keys at once

On the other hand, if you want to print all the values of an Associative array, use ${my_array[@]} as shown here:

echo "Values: ${my_array[@]}"

To print values of the myarray, I used the below command:

echo "Values: ${myarray[@]}"
Print values of associate array at once

5. Find the Length of the Associative Array

The method for finding the length of the associative array is exactly the same as you do with the indexed arrays. You can use the ${#array_name[@]} syntax to find this count as shown here:

echo "Length: ${#my_array[@]}"

If I want to find a length of myarray array, then I would use the following:

echo "Length: ${#myarray[@]}"
Find length of associative array

6. Iterate over an Associative array

Iterating over an associative array allows you to process each key-value pair. In Bash, you can loop through:

  • The keys using ${!array_name[@]}.
  • The corresponding values using ${array_name[$key]}.

This is useful for tasks like displaying data, modifying values, or performing computations. For example, here I wrote a simple for loop to print the keys and elements accordingly:

for key in "${!myarray[@]}"; do
    echo "Key: $key, Value: ${myarray[$key]}"
done
Iterate over associative array

7. Check if a key exists in the Associative array

Sometimes, you need to verify whether a specific key exists in an associative array. Bash provides the -v operator for this purpose.

Here, I wrote a simple if else script that uses the -v flag to check if a key exists in the myarray array:

if [[ -v myarray["username"] ]]; then
    echo "Key 'username' exists"
else
    echo "Key 'username' does not exist"
fi
check if a key pair exist in associative array

8. Clear Associative array

If you want to remove specific keys from the associative array, then you can use the unset command along with a key you want to remove:

unset my_array["key1"]

For example, if I want to remove the email key from the myarray array, then I will use the following:

unset myarray["email"]
Remove key pairs from associative array

9. Delete the Associative array

If you want to delete the associative array, all you have to do is use the unset command along with the array name as shown here:

unset my_array

For example, if I want to delete the myarray array, then I would use the following:

unset myarray
delete associative array
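To tie these operations together, here is a short sketch that uses an associative array to count word occurrences, combining declaration, assignment, arithmetic on values, and iteration:

```shell
#!/bin/bash

# Count how many times each word appears in the list
declare -A count

for word in apple banana apple cherry banana apple; do
    (( count[$word]++ ))
done

# Iterate over the keys and print each tally
for key in "${!count[@]}"; do
    echo "$key appears ${count[$key]} time(s)"
done
```

Note that the iteration order of the keys is not guaranteed, which is one of the trade-offs of associative arrays.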

Wrapping Up...

In this tutorial, I went through the basics of the associative array with multiple examples. I hope you will find this guide helpful.

If you have any questions or suggestions, leave us a comment.

In this post I will show you how to install the ZSH shell on Rocky Linux. ZSH is an alternative shell that some people prefer over the Bash shell, citing better auto-completion, theme support, and a plugin system. If you want to give ZSH a try, it is quite easy to install. This post focuses on Rocky Linux: how to install ZSH and get started with its usage.

Before installing anything new, it’s good practice to update your system packages:

sudo dnf update

It might be easier than you think to install and use a new shell. First install the package like this:

sudo dnf install zsh

Now you can enter a zsh session by invoking the shell’s name:

zsh

You might not be sure if it succeeded, so how can you verify which shell you are using now?

echo $0

You should see some output like the following:

[root@mypc]~# echo $0
zsh
[root@mypc]~#

Good. If it says bash or something other than zsh, you have a problem with your setup. Now let's run a couple of basic commands.

Example 1: Print all numbers from 1 to 10. In Zsh, you can use a for loop to do this:

for i in {1..10}; do echo $i; done

Example 2: Create a variable to store your username and then print it. You can use the $USER environment variable which automatically contains your username:

my_username=$USER
echo $my_username

Example 3: Echo a string that says “I love $0”. The $0 variable in a shell script or interactive shell session refers to the name of the script or shell being run. Here’s how to use it:

echo "I love $0"

When run in an interactive Zsh session, this will output something like “I love -zsh” if you’re in a login shell, or “I love zsh” if not.

Conclusion

Switching shells on a Linux system is easy thanks to its modularity. Now that you have seen how to install ZSH, you may like it and decide to use it as your preferred shell.

Even on Linux, you can enjoy gaming and interact with fellow gamers via Steam. As a Linux gamer, Steam is a handy game distribution platform that allows you to install different games, including purchased ones. Moreover, with Steam, you can connect with other gamers and play multiplayer titles. Steam is a cross-platform platform that lets you purchase and install games on any device through a Steam account. This post gives different options for installing Steam on Ubuntu 24.04.

Different Methods of Installing Steam on Ubuntu 24.04

No matter the Ubuntu version that you use, there are three easy ways of installing Steam. For our guide, we are working on Ubuntu 24.04, and we’ve detailed the steps to follow for each method. Take a look!

Method 1: Install Steam via Ubuntu Repository

On your Ubuntu, Steam can be installed through the multiverse repository by following the steps below.
Step 1: Add the Multiverse Repository
The multiverse repository isn’t added on Ubuntu by default but executing the following command will add it.

$ sudo add-apt-repository multiverse

steam-1.png

Step 2: Refresh the Package Index
After adding the new repository, we must refresh the package index before we can install Steam.

$ sudo apt update

steam-2.png

Step 3: Install Steam
Lastly, install Steam from the repository by running the APT command below.

$ sudo apt install steam

steam-3.png

Method 2: Install Steam as a Snap

Steam is available as a snap package and you can install it by accessing the Ubuntu 24.04 App Center or by installing via command-line.
To install it via GUI, use the below steps.

Step 1: Search for Steam on App Center

On your Ubuntu, open the App Center and search for “Steam” in the search box. Different results will open and the first one is what we want to install.

steam-5.png

Step 2: Install Steam

On the search results page, click on Steam to open a window showing a summary of its information. Locate the green Install button and click on it.

steam-6.png

You will get prompted to enter your password before the installation can begin.

steam-7.png

Once you do so, a window showing the progress bar of the installation process will appear. Once the process completes, you will have Steam installed and ready for use on your Ubuntu 24.04.

Alternatively, if you prefer using the command-line option to install Steam from App Center, you can do so using the snap command. Specify the package when running your command as shown below.

$ sudo snap install steam

steam-8.png

On the output, the download and installation progress will be shown and once it completes, Steam will be available from your applications. You can open it and set it up for your gaming.

Method 3: Download and Install the Steam Package

Steam releases a .deb package for Linux and by downloading it, you can use it to install Steam. Unlike the previous methods, this method requires downloading the Steam package from its website using command line utilities such as wget or curl.

Step 1: Install wget

To download the Steam .deb package, we will use wget. You can skip this step if you already have it installed. Otherwise, execute the below command.

$ sudo apt install wget

steam-9.png

Step 2: Download the Steam Package

With wget installed, run the following command to download the Steam .deb package.

$ wget https://steamcdn-a.akamaihd.net/client/installer/steam.deb

steam-10.png

Step 3: Install Steam

To install the .deb package, we will use the dpkg command below.

$ sudo dpkg -i steam.deb

steam-11.png

Once Steam completes installing, verify that you can access it by searching for it on your Ubuntu 24.04.

steam-12.png

With that, you now have Steam installed on Ubuntu.

Conclusion

Steam is a handy tool for any gamer, and its cross-platform nature means you can install it on Ubuntu 24.04. We’ve given three installation methods you can use depending on your preference. Once you’ve installed Steam, configure it and create your account to start using it. Happy gaming!

Proxmox VE 8 is one of the best open-source and free Type-I hypervisors out there for running QEMU/KVM virtual machines (VMs) and LXC containers. It has a nice web management interface and a lot of features.

One of the most amazing features of Proxmox VE is that it can passthrough PCI/PCIE devices (i.e. an NVIDIA GPU) from your computer to Proxmox VE virtual machines (VMs). The PCI/PCIE passthrough is getting better and better with newer Proxmox VE releases. At the time of this writing, the latest version of Proxmox VE is Proxmox VE v8.1 and it has great PCI/PCIE passthrough support.

In this article, I am going to show you how to configure your Proxmox VE 8 host/server for PCI/PCIE passthrough and configure your NVIDIA GPU for PCIE passthrough on Proxmox VE 8 virtual machines (VMs).

 

Table of Contents

  1. Enabling Virtualization from the BIOS/UEFI Firmware of Your Motherboard
  2. Installing Proxmox VE 8
  3. Enabling Proxmox VE 8 Community Repositories
  4. Installing Updates on Proxmox VE 8
  5. Enabling IOMMU from the BIOS/UEFI Firmware of Your Motherboard
  6. Enabling IOMMU on Proxmox VE 8
  7. Verifying if IOMMU is Enabled on Proxmox VE 8
  8. Loading VFIO Kernel Modules on Proxmox VE 8
  9. Listing IOMMU Groups on Proxmox VE 8
  10. Checking if Your NVIDIA GPU Can Be Passthrough to a Proxmox VE 8 Virtual Machine (VM)
  11. Checking for the Kernel Modules to Blacklist for PCI/PCIE Passthrough on Proxmox VE 8
  12. Blacklisting Required Kernel Modules for PCI/PCIE Passthrough on Proxmox VE 8
  13. Configuring Your NVIDIA GPU to Use the VFIO Kernel Module on Proxmox VE 8
  14. Passthrough the NVIDIA GPU to a Proxmox VE 8 Virtual Machine (VM)
  15. Still Having Problems with PCI/PCIE Passthrough on Proxmox VE 8 Virtual Machines (VMs)?
  16. Conclusion
  17. References

 

Enabling Virtualization from the BIOS/UEFI Firmware of Your Motherboard

Before you can install Proxmox VE 8 on your computer/server, you must enable the hardware virtualization feature of your processor from the BIOS/UEFI firmware of your motherboard. The process is different for different motherboards. So, if you need any assistance in enabling hardware virtualization on your motherboard, read this article.

 

Installing Proxmox VE 8

Proxmox VE 8 is free to download, install, and use. Before you get started, make sure to install Proxmox VE 8 on your computer. If you need any assistance on that, read this article.

 

Enabling Proxmox VE 8 Community Repositories

Once you have Proxmox VE 8 installed on your computer/server, make sure to enable the Proxmox VE 8 community package repositories.

By default, Proxmox VE 8 enterprise package repositories are enabled and you won’t be able to get/install updates and bug fixes from the enterprise repositories unless you have bought Proxmox VE 8 enterprise licenses. So, if you want to use Proxmox VE 8 for free, make sure to enable the Proxmox VE 8 community package repositories to get the latest updates and bug fixes from Proxmox for free.

 

Installing Updates on Proxmox VE 8

Once you’ve enabled the Proxmox VE 8 community package repositories, make sure to install all the available updates on your Proxmox VE 8 server.

 

Enabling IOMMU from the BIOS/UEFI Firmware of Your Motherboard

The IOMMU configuration is found in different locations in different motherboards. To enable IOMMU on your motherboard, read this article.

 

Enabling IOMMU on Proxmox VE 8

Once the IOMMU is enabled on the hardware side, you also need to enable IOMMU from the software side (from Proxmox VE 8).

To enable IOMMU from Proxmox VE 8, you have to add the following kernel boot parameters:

Processor Vendor | Kernel boot parameters to add
---------------- | -----------------------------
Intel            | intel_iommu=on iommu=pt
AMD              | iommu=pt

 

To modify the kernel boot parameters of Proxmox VE 8, open the /etc/default/grub file with the nano text editor as follows:

$ nano /etc/default/grub

 

At the end of the GRUB_CMDLINE_LINUX_DEFAULT, add the required kernel boot parameters for enabling IOMMU depending on the processor you’re using.

As I am using an AMD processor, I have added only the kernel boot parameter iommu=pt at the end of the GRUB_CMDLINE_LINUX_DEFAULT line in the /etc/default/grub file.
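For reference, after the edit the relevant line in my /etc/default/grub looks like this (an Intel system would add intel_iommu=on iommu=pt instead; quiet is just the stock Proxmox flag, so keep whatever parameters were already there):

```shell
GRUB_CMDLINE_LINUX_DEFAULT="quiet iommu=pt"
```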

Once you’re done, press <Ctrl> + X followed by Y and <Enter> to save the /etc/default/grub file.

 

Now, update the GRUB boot configurations with the following command:

$ update-grub2

 

Once the GRUB boot configurations are updated, click on Reboot to restart your Proxmox VE 8 server for the changes to take effect.

 

Verifying if IOMMU is Enabled on Proxmox VE 8

To verify whether IOMMU is enabled on Proxmox VE 8, run the following command:

$ dmesg | grep -e DMAR -e IOMMU

 

If IOMMU is enabled, you will see some outputs confirming that IOMMU is enabled.

If IOMMU is not enabled, you may not see any outputs.

 

You also need to have the IOMMU Interrupt Remapping enabled for PCI/PCIE passthrough to work.

To check if IOMMU Interrupt Remapping is enabled on your Proxmox VE 8 server, run the following command:

$ dmesg | grep 'remapping'

 

As you can see, IOMMU Interrupt Remapping is enabled on my Proxmox VE 8 server.

NOTE: Most modern AMD and Intel processors will have IOMMU Interrupt Remapping enabled. If for any reason, you don’t have IOMMU Interrupt Remapping enabled, there’s a workaround. You have to enable Unsafe Interrupts for VFIO. Read this article for more information on enabling Unsafe Interrupts on your Proxmox VE 8 server.

 

Loading VFIO Kernel Modules on Proxmox VE 8

The PCI/PCIE passthrough is done mainly by the VFIO (Virtual Function I/O) kernel modules on Proxmox VE 8. The VFIO kernel modules are not loaded at boot time by default on Proxmox VE 8. But, it’s easy to load the VFIO kernel modules at boot time on Proxmox VE 8.

First, open the /etc/modules-load.d/vfio.conf file with the nano text editor as follows:

$ nano /etc/modules-load.d/vfio.conf

 

Type in the following lines in the /etc/modules-load.d/vfio.conf file.

vfio

vfio_iommu_type1

vfio_pci

 

Once you’re done, press <Ctrl> + X followed by Y and <Enter> to save the changes.

 

Now, update the initramfs of your Proxmox VE 8 installation with the following command:

$ update-initramfs -u -k all

 

Once the initramfs is updated, click on Reboot to restart your Proxmox VE 8 server for the changes to take effect.

 

Once your Proxmox VE 8 server boots, you should see that all the required VFIO kernel modules are loaded.

$ lsmod | grep vfio

 

Listing IOMMU Groups on Proxmox VE 8

To passthrough PCI/PCIE devices on Proxmox VE 8 virtual machines (VMs), you will need to check the IOMMU groups of your PCI/PCIE devices quite frequently. To make this easier, I saved a shell script (found on GitHub, but I can't remember the name of the original poster) in the path /usr/local/bin/print-iommu-groups so that I can just run the print-iommu-groups command and it will print the IOMMU groups on the Proxmox VE 8 shell.

 

First, create a new file print-iommu-groups in the path /usr/local/bin and open it with the nano text editor as follows:

$ nano /usr/local/bin/print-iommu-groups

 

Type in the following lines in the print-iommu-groups file:

#!/bin/bash
shopt -s nullglob
for g in $(find /sys/kernel/iommu_groups/* -maxdepth 0 -type d | sort -V); do
    echo "IOMMU Group ${g##*/}:"
    for d in "$g"/devices/*; do
        echo -e "\t$(lspci -nns "${d##*/}")"
    done
done

 

Once you’re done, press <Ctrl> + X followed by Y and <Enter> to save the changes to the print-iommu-groups file.

 

Make the print-iommu-groups script file executable with the following command:

$ chmod +x /usr/local/bin/print-iommu-groups

 

Now, you can run the print-iommu-groups command as follows to print the IOMMU groups of the PCI/PCIE devices installed on your Proxmox VE 8 server:

$ print-iommu-groups

 

As you can see, the IOMMU groups of the PCI/PCIE devices installed on my Proxmox VE 8 server are printed.

 

Checking if Your NVIDIA GPU Can Be Passthrough to a Proxmox VE 8 Virtual Machine (VM)

To passthrough a PCI/PCIE device to a Proxmox VE 8 virtual machine (VM), it must be in its own IOMMU group. If 2 or more PCI/PCIE devices share an IOMMU group, you can’t passthrough any of the PCI/PCIE devices of that IOMMU group to any Proxmox VE 8 virtual machines (VMs).

So, if your NVIDIA GPU and its audio device are in their own IOMMU group, you can passthrough the NVIDIA GPU to any Proxmox VE 8 virtual machine (VM).

On my Proxmox VE 8 server, I am using an MSI X570 ACE motherboard paired with a Ryzen 3900X processor and Gigabyte RTX 4070 NVIDIA GPU. According to the IOMMU groups of my system, I can passthrough the NVIDIA RTX 4070 GPU (IOMMU Group 21), RTL8125 2.5Gbe Ethernet Controller (IOMMU Group 20), Intel I211 Gigabit Ethernet Controller (IOMMU Group 19), a USB 3.0 controller (IOMMU Group 24), and the Onboard HD Audio Controller (IOMMU Group 25).

$ print-iommu-groups

 

As the main focus of this article is configuring Proxmox VE 8 for passing through the NVIDIA GPU to Proxmox VE 8 virtual machines, the NVIDIA GPU and its audio device must be in their own IOMMU group.

 

Checking for the Kernel Modules to Blacklist for PCI/PCIE Passthrough on Proxmox VE 8

To passthrough a PCI/PCIE device on a Proxmox VE 8 virtual machine (VM), you must make sure that Proxmox VE forces it to use the VFIO kernel module instead of its original kernel module.

To find out the kernel module your PCI/PCIE devices are using, you will need to know the vendor ID and device ID of these PCI/PCIE devices. You can find the vendor ID and device ID of the PCI/PCIE devices using the print-iommu-groups command.

$ print-iommu-groups

 

For example, the vendor ID and device ID of my NVIDIA RTX 4070 GPU is 10de:2786, and that of its audio device is 10de:22bc.
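If you script your setup, the vendor and device ID pair can be pulled out of an lspci -nn line automatically. A minimal sketch; the extract_id helper and the sample line below are my own illustration, not part of Proxmox VE:

```shell
#!/bin/bash
# Pull the last [xxxx:xxxx] bracket (the vendor:device ID) out of an 'lspci -nn' line.
extract_id() {
    printf '%s\n' "$1" | grep -oE '\[[0-9a-f]{4}:[0-9a-f]{4}\]' | tail -n 1 | tr -d '[]'
}

# Hypothetical 'lspci -nn' output line for an RTX 4070:
extract_id "01:00.0 VGA compatible controller [0300]: NVIDIA Corporation AD104 [10de:2786] (rev a1)"
# Prints: 10de:2786
```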

 

To find the kernel module a PCI/PCIE device 10de:2786 (my NVIDIA RTX 4070 GPU) is using, run the lspci command as follows:

$ lspci -v -d 10de:2786

 

As you can see, my NVIDIA RTX 4070 GPU is using the nvidiafb and nouveau kernel modules by default. So, it can't be passed to a Proxmox VE 8 virtual machine (VM) at this point.

 

The Audio device of my NVIDIA RTX 4070 GPU is using the snd_hda_intel kernel module. So, it can’t be passed on a Proxmox VE 8 virtual machine at this point either.

$ lspci -v -d 10de:22bc

 

So, to passthrough my NVIDIA RTX 4070 GPU and its audio device on a Proxmox VE 8 virtual machine (VM), I must blacklist the nvidiafb, nouveau, and snd_hda_intel kernel modules and configure my NVIDIA RTX 4070 GPU and its audio device to use the vfio-pci kernel module.

 

Blacklisting Required Kernel Modules for PCI/PCIE Passthrough on Proxmox VE 8

To blacklist kernel modules on Proxmox VE 8, open the /etc/modprobe.d/blacklist.conf file with the nano text editor as follows:

$ nano /etc/modprobe.d/blacklist.conf

 

To blacklist the kernel modules nouveau, nvidiafb, and snd_hda_intel kernel modules (to passthrough NVIDIA GPU), add the following lines in the /etc/modprobe.d/blacklist.conf file:

blacklist nouveau

blacklist nvidiafb

blacklist snd_hda_intel

 

Once you’re done, press <Ctrl> + X followed by Y and <Enter> to save the /etc/modprobe.d/blacklist.conf file.

 

Configuring Your NVIDIA GPU to Use the VFIO Kernel Module on Proxmox VE 8

To configure the PCI/PCIE device (i.e. your NVIDIA GPU) to use the VFIO kernel module, you need to know their vendor ID and device ID.

In this case, the vendor ID and device ID of my NVIDIA RTX 4070 GPU and its audio device are 10de:2786 and 10de:22bc.

 

To configure your NVIDIA GPU to use the VFIO kernel module, open the /etc/modprobe.d/vfio.conf file with the nano text editor as follows:

$ nano /etc/modprobe.d/vfio.conf

 

To configure your NVIDIA GPU and its audio device with the <vendor-id>:<device-id> 10de:2786 and 10de:22bc (let’s say) respectively to use the VFIO kernel module, add the following line to the /etc/modprobe.d/vfio.conf file.

options vfio-pci ids=10de:2786,10de:22bc

 

Once you’re done, press <Ctrl> + X followed by Y and <Enter> to save the /etc/modprobe.d/vfio.conf file.

 

Now, update the initramfs of Proxmox VE 8 with the following command:

$ update-initramfs -u -k all

 

Once initramfs is updated, click on Reboot to restart your Proxmox VE 8 server for the changes to take effect.

 

Once your Proxmox VE 8 server boots, you should see that your NVIDIA GPU and its audio device (10de:2786 and 10de:22bc in my case) are using the vfio-pci kernel module. Now, your NVIDIA GPU is ready to be passed to a Proxmox VE 8 virtual machine.

$ lspci -v -d 10de:2786

$ lspci -v -d 10de:22bc

 

Passthrough the NVIDIA GPU to a Proxmox VE 8 Virtual Machine (VM)

Now that your NVIDIA GPU is ready for passthrough on Proxmox VE 8 virtual machines (VMs), you can passthrough your NVIDIA GPU on your desired Proxmox VE 8 virtual machine and install the NVIDIA GPU drivers depending on the operating system that you’re using on that virtual machine as usual.

For detailed information on how to passthrough your NVIDIA GPU on a Proxmox VE 8 virtual machine (VM) with different operating systems installed, read one of the following articles:

  • How to Passthrough an NVIDIA GPU to a Windows 11 Proxmox VE 8 Virtual Machine (VM)
  • How to Passthrough an NVIDIA GPU to a Ubuntu 24.04 LTS Proxmox VE 8 Virtual Machine (VM)
  • How to Passthrough an NVIDIA GPU to a LinuxMint 21 Proxmox VE 8 Virtual Machine (VM)
  • How to Passthrough an NVIDIA GPU to a Debian 12 Proxmox VE 8 Virtual Machine (VM)
  • How to Passthrough an NVIDIA GPU to an Elementary OS 8 Proxmox VE 8 Virtual Machine (VM)
  • How to Passthrough an NVIDIA GPU to a Fedora 39+ Proxmox VE 8 Virtual Machine (VM)
  • How to Passthrough an NVIDIA GPU on an Arch Linux Proxmox VE 8 Virtual Machine (VM)
  • How to Passthrough an NVIDIA GPU on a Red Hat Enterprise Linux 9 (RHEL 9) Proxmox VE 8 Virtual Machine (VM)

 

Still Having Problems with PCI/PCIE Passthrough on Proxmox VE 8 Virtual Machines (VMs)?

Even after trying everything listed in this article correctly, if PCI/PCIE passthrough still does not work for you, be sure to try out some of the Proxmox VE PCI/PCIE passthrough tricks and/or workarounds that you can use to get PCI/PCIE passthrough work on your hardware.

 

Conclusion

In this article, I have shown you how to configure your Proxmox VE 8 server for PCI/PCIE passthrough so that you can passthrough PCI/PCIE devices (i.e. your NVIDIA GPU) to your Proxmox VE 8 virtual machines (VMs). I have also shown you how to find out the kernel modules that you need to blacklist and how to blacklist them for a successful passthrough of your desired PCI/PCIE devices (i.e. your NVIDIA GPU) to a Proxmox VE 8 virtual machine. Finally, I have shown you how to configure your desired PCI/PCIE devices (i.e. your NVIDIA GPU) to use the VFIO kernel modules, which is also an essential step for a successful passthrough of your desired PCI/PCIE devices (i.e. your NVIDIA GPU) to a Proxmox VE 8 virtual machine (VM).

 

References

  1. PCI(e) Passthrough – Proxmox VE
  2. PCI Passthrough – Proxmox VE
  3. The ultimate gaming virtual machine on proxmox – YouTube

Anyone can easily run multiple operating systems on one host simultaneously, provided they have VirtualBox installed. Even on Ubuntu 24.04, you can install VirtualBox and use it to run any supported operating system. The best part about VirtualBox is that it is open-source virtualization software, and you can install and use it anytime. Whether you are stuck on how to install VirtualBox on Ubuntu 24.04 or looking to run other operating systems on top of your host, this post gives you two easy methods.

Two Methods of Installing VirtualBox on Ubuntu 24.04

There are different ways of installing VirtualBox on Ubuntu 24.04. For instance, you can retrieve a stable VirtualBox version from Ubuntu’s repository or add Oracle’s VirtualBox repository to install a specific version. Which method to use will depend on your requirements, and we’ve discussed the methods in the sections below.

Method 1: Install VirtualBox via APT
The easiest way of installing VirtualBox on Ubuntu 24.04 is by sourcing it from the official Ubuntu repository using APT.
Below are the steps you should follow.
Step 1: Update the Repository
In every installation, the first step involves refreshing the source list to update the package index by executing the following command.

$ sudo apt update

Step 2: Install VirtualBox
Once you’ve updated your package index, the next task is to run the install command below to fetch and install the VirtualBox package.

$ sudo apt install virtualbox

Step 3: Verify the Installation
After the installation, use the following command to check the installed version. The output also confirms that you successfully installed VirtualBox on Ubuntu 24.04.

$ VBoxManage --version

Method 2: Install VirtualBox from Oracle’s Repository
The previous method shows that we installed VirtualBox version 7.0.14. However, if you visit the VirtualBox website, depending on when you read this post, the version we've installed may not be the latest.

Although the older VirtualBox versions are okay, installing the latest version is always the better option as it contains all patches and fixes. However, to install the latest version, you must add Oracle’s repository to your Ubuntu before you can execute the install command.

Step 1: Install Prerequisites
All the dependencies you require before you can add the Oracle VirtualBox repository can be installed when you install the software-properties-common package.

$ sudo apt install software-properties-common

Step 2: Add GPG Keys
GPG keys help verify the authenticity of repositories before we add them to the system. The Oracle repository is a third-party repository, and importing its GPG key lets APT check its packages for integrity and authenticity.
Here’s how you add the GPG keys.

$ wget -q https://www.virtualbox.org/download/oracle_vbox_2016.asc -O- | sudo apt-key add -

You will receive an output on your terminal showing that the key has been downloaded and installed.
Step 3: Add Oracle’s VirtualBox Repository
Oracle has a VirtualBox repository for all supported Operating Systems. To fetch this repository and add it to your /etc/apt/sources.list.d/, execute the following command.

$ echo "deb [arch=amd64] https://download.virtualbox.org/virtualbox/debian $(lsb_release -cs) contrib" | sudo tee /etc/apt/sources.list.d/virtualbox.list

The output shows that a new repository entry has been created from which we will source VirtualBox when we execute the install command.

Step 4: Install VirtualBox
With the repository added, let’s first refresh the package index by updating it.

$ sudo apt update

Next, specify which VirtualBox you want to install using the below syntax.

$ sudo apt install virtualbox-[version]

For instance, if the latest version when reading this post is version 7.1, you would replace version in the above command with 7.1.

However, ensure that the specified version is available on the VirtualBox website. Otherwise, you will get an error as you can’t install something that can’t be found.

Conclusion

VirtualBox is an effective way of running numerous operating systems on one host simultaneously. This post shares two methods of installing VirtualBox on Ubuntu 24.04. First, you can install it via APT by sourcing it from the Ubuntu repository. Alternatively, you can add the Oracle repository and specify the exact version of VirtualBox you want to install.

In recent years, support for PCI/PCIE (i.e. GPU passthrough) has improved a lot in newer hardware. So, the regular Proxmox VE PCI/PCIE and GPU passthrough guide should work in most new hardware. Still, you may face many problems passing through GPUs and other PCI/PCIE devices on a Proxmox VE virtual machine. There are many tweaks/fixes/workarounds for some of the common Proxmox VE GPU and PCI/PCIE passthrough problems.

In this article, I am going to discuss some of the most common Proxmox VE PCI/PCIE passthrough and GPU passthrough problems and the steps you can take to solve those problems.

 

Table of Contents

  1. What to do if IOMMU Interrupt Remapping is not Supported?
  2. What to do if My GPU (or PCI/PCIE Device) is not in its own IOMMU Group?
  3. How do I Blacklist AMD GPU Drivers on Proxmox VE?
  4. How do I Blacklist NVIDIA GPU Drivers on Proxmox VE?
  5. How do I Blacklist Intel GPU Drivers on Proxmox VE?
  6. How to Check if my GPU (or PCI/PCIE Device) is Using the VFIO Driver on Proxmox VE?
  7. I Have Blacklisted the AMD GPU Drivers, Still, the GPU is not Using the VFIO Driver, What to Do?
  8. I Have Blacklisted the NVIDIA GPU Drivers, Still, the GPU is not Using the VFIO Driver, What to Do?
  9. I Have Blacklisted the Intel GPU Drivers, Still, the GPU is not Using the VFIO Driver, What to Do?
  10. Single GPU Used VFIO Driver, But When Configured a Second GPU, it Didn’t Work, Why?
  11. Why Disable VGA Arbitration for the GPUs and How to Do It?
  12. What if my GPU is Still not Using the VFIO Driver Even After Configuring VFIO?
  13. GPU Passthrough Showed No Errors, But I’m Getting a Black Screen on the Monitor Connected to the GPU Passed to the Proxmox VE VM, Why?
  14. What is AMD Vendor Reset Bug and How to Solve it?
  15. How to Provide a vBIOS for the Passed GPU on a Proxmox VE Virtual Machine?
  16. What to do If Some Apps Crash the Proxmox VE Windows Virtual Machine?
  17. How to Solve HDMI Audio Crackling/Broken Problems on Proxmox VE Linux Virtual Machines?
  18. How to Update Proxmox VE initramfs?
  19. How to Update Proxmox VE GRUB Bootloader?
  20. Conclusion
  21. References

 

What to do If IOMMU Interrupt Remapping is not Supported?

For PCI/PCIE passthrough, IOMMU interrupt remapping is essential.

To check whether your processor supports IOMMU interrupt remapping, run the command below:

$ dmesg | grep -i remap

 

If your processor supports IOMMU interrupt remapping, you will see some sort of output confirming that interrupt remapping is enabled. Otherwise, you will see no outputs.

If IOMMU interrupt remapping is not supported on your processor, you will have to configure unsafe interrupts on your Proxmox VE server to passthrough PCI/PCIE devices on Proxmox VE virtual machines.

To configure unsafe interrupts on Proxmox VE, create a new file iommu_unsafe_interrupts.conf in the /etc/modprobe.d directory and open it with the nano text editor as follows:

$ nano /etc/modprobe.d/iommu_unsafe_interrupts.conf

 

Add the following line in the iommu_unsafe_interrupts.conf file and press <Ctrl> + X followed by Y and <Enter> to save the file.

options vfio_iommu_type1 allow_unsafe_interrupts=1

 

Once you’re done, you must update the initramfs of your Proxmox VE server.

 

What to do if my GPU (or PCI/PCIE Device) is not in its own IOMMU Group?

If your server has multiple PCI/PCIE slots, you can move the GPU to a different PCI/PCIE slot and see if the GPU is in its own IOMMU group.

If that does not work, you can try enabling the ACS override kernel patch on Proxmox VE.

To try enabling the ACS override kernel patch on Proxmox VE, open the /etc/default/grub file with the nano text editor as follows:

$ nano /etc/default/grub

 

Add the kernel boot option pcie_acs_override=downstream at the end of the GRUB_CMDLINE_LINUX_DEFAULT.

Once you’re done, press <Ctrl> + X followed by Y and <Enter> to save the file and make sure to update the Proxmox VE GRUB bootloader for the changes to take effect.

You should have better IOMMU grouping once your Proxmox VE server boots.

If your GPU still does not have its own IOMMU group, you can go one step further by using the pcie_acs_override=downstream,multifunction instead. You should have an even better IOMMU grouping.

 

If pcie_acs_override=downstream,multifunction results in better IOMMU grouping than pcie_acs_override=downstream, then why use pcie_acs_override=downstream at all?

Well, the purpose of PCIE ACS override is to fool the kernel into thinking that the PCIE devices are isolated when they are not in reality. So, PCIE ACS override comes with security and stability issues. That’s why you should try using a less aggressive PCIE ACS override option pcie_acs_override=downstream first and see if your problem is solved. If pcie_acs_override=downstream does not work, only then you should use the more aggressive option pcie_acs_override=downstream,multifunction.
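For reference, the resulting line in /etc/default/grub might look like the sketch below. The quiet and iommu=pt options are assumptions based on a typical setup; keep whatever options your file already has:

```shell
GRUB_CMDLINE_LINUX_DEFAULT="quiet iommu=pt pcie_acs_override=downstream"

# Only if the above still isn't enough:
# GRUB_CMDLINE_LINUX_DEFAULT="quiet iommu=pt pcie_acs_override=downstream,multifunction"
```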

 

How do I Blacklist AMD GPU Drivers on Proxmox VE?

If you want to passthrough an AMD GPU on Proxmox VE virtual machines, you must blacklist the AMD GPU drivers and make sure that it uses the VFIO driver instead.

First, open the /etc/modprobe.d/blacklist.conf file with the nano text editor as follows:

$ nano /etc/modprobe.d/blacklist.conf

 

To blacklist the AMD GPU drivers, add the following lines to the /etc/modprobe.d/blacklist.conf file and press <Ctrl> + X followed by Y and <Enter> to save the file.

blacklist radeon

blacklist amdgpu

 

Once you’re done, you must update the initramfs of your Proxmox VE server for the changes to take effect.

 

How do I Blacklist NVIDIA GPU Drivers on Proxmox VE?

If you want to passthrough an NVIDIA GPU on Proxmox VE virtual machines, you must blacklist the NVIDIA GPU drivers and make sure that it uses the VFIO driver instead.

First, open the /etc/modprobe.d/blacklist.conf file with the nano text editor as follows:

$ nano /etc/modprobe.d/blacklist.conf

 

To blacklist the NVIDIA GPU drivers, add the following lines to the /etc/modprobe.d/blacklist.conf file and press <Ctrl> + X followed by Y and <Enter> to save the file.

blacklist nouveau

blacklist nvidia

blacklist nvidiafb

blacklist nvidia_drm

 

Once you’re done, you must update the initramfs of your Proxmox VE server for the changes to take effect.

 

How do I Blacklist Intel GPU Drivers on Proxmox VE?

If you want to passthrough an Intel GPU on Proxmox VE virtual machines, you must blacklist the Intel GPU drivers and make sure that it uses the VFIO driver instead.

First, open the /etc/modprobe.d/blacklist.conf file with the nano text editor as follows:

$ nano /etc/modprobe.d/blacklist.conf

 

To blacklist the Intel GPU drivers, add the following lines to the /etc/modprobe.d/blacklist.conf file and press <Ctrl> + X followed by Y and <Enter> to save the file.

blacklist snd_hda_intel

blacklist snd_hda_codec_hdmi

blacklist i915

 

Once you’re done, you must update the initramfs of your Proxmox VE server for the changes to take effect.

 

How to Check if my GPU (or PCI/PCIE Device) is Using the VFIO Driver on Proxmox VE?

To check if your GPU or desired PCI/PCIE devices are using the VFIO driver, run the following command:

$ lspci -v

 

If your GPU or PCI/PCIE device is using the VFIO driver, you should see the line Kernel driver in use: vfio-pci as marked in the screenshot below.
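For scripted checks, the driver name can be extracted from the lspci output instead of eyeballing it. A sketch; the driver_in_use helper and the sample line are my own illustration:

```shell
#!/bin/bash
# Print only the driver name from an 'lspci -v' detail line.
driver_in_use() {
    printf '%s\n' "$1" | sed -n 's/^[[:space:]]*Kernel driver in use: //p'
}

# Sample detail line as lspci prints it (indented):
driver_in_use "	Kernel driver in use: vfio-pci"
# Prints: vfio-pci
```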

 

I Have Blacklisted the AMD GPU Drivers, Still, the GPU is not Using the VFIO Driver, What to Do?

At times, blacklisting the AMD GPU drivers is not enough, you also have to configure the AMD GPU drivers to load after the VFIO driver.

To do that, open the /etc/modprobe.d/vfio.conf file with the nano text editor as follows:

$ nano /etc/modprobe.d/vfio.conf

 

To configure the AMD GPU drivers to load after the VFIO driver, add the following lines to the /etc/modprobe.d/vfio.conf file and press <Ctrl> + X followed by Y and <Enter> to save the file.

softdep radeon pre: vfio-pci

softdep amdgpu pre: vfio-pci

 

Once you’re done, you must update the initramfs of your Proxmox VE server for the changes to take effect.

 

I Have Blacklisted the NVIDIA GPU Drivers, Still, the GPU is not Using the VFIO Driver, What to Do?

At times, blacklisting the NVIDIA GPU drivers is not enough, you also have to configure the NVIDIA GPU drivers to load after the VFIO driver.

To do that, open the /etc/modprobe.d/vfio.conf file with the nano text editor as follows:

$ nano /etc/modprobe.d/vfio.conf

 

To configure the NVIDIA GPU drivers to load after the VFIO driver, add the following lines to the /etc/modprobe.d/vfio.conf file and press <Ctrl> + X followed by Y and <Enter> to save the file.

softdep nouveau pre: vfio-pci

softdep nvidia pre: vfio-pci

softdep nvidiafb pre: vfio-pci

softdep nvidia_drm pre: vfio-pci

softdep drm pre: vfio-pci

 

Once you’re done, you must update the initramfs of your Proxmox VE server for the changes to take effect.

 

I Have Blacklisted the Intel GPU Drivers, Still, the GPU is not Using the VFIO Driver, What to Do?

At times, blacklisting the Intel GPU drivers is not enough, you also have to configure the Intel GPU drivers to load after the VFIO driver.

To do that, open the /etc/modprobe.d/vfio.conf file with the nano text editor as follows:

$ nano /etc/modprobe.d/vfio.conf

 

To configure the Intel GPU drivers to load after the VFIO driver, add the following lines to the /etc/modprobe.d/vfio.conf file and press <Ctrl> + X followed by Y and <Enter> to save the file.

softdep snd_hda_intel pre: vfio-pci

softdep snd_hda_codec_hdmi pre: vfio-pci

softdep i915 pre: vfio-pci

 

Once you’re done, you must update the initramfs of your Proxmox VE server for the changes to take effect.

 

Single GPU Used VFIO Driver, But When Configured a Second GPU, it Didn’t Work, Why?

In the /etc/modprobe.d/vfio.conf file, you must add the IDs of all the PCI/PCIE devices that you want to use the VFIO driver in a single line. One device per line won’t work.

For example, if you have 2 GPUs that you want to configure to use the VFIO driver, you must add their IDs in a single line in the /etc/modprobe.d/vfio.conf file as follows:

options vfio-pci ids=<GPU-1>,<GPU-1-Audio>,<GPU-2>,<GPU-2-Audio>

 

If you want to add another GPU to the list, just append it at the end of the existing vfio-pci line in the /etc/modprobe.d/vfio.conf file as follows:

options vfio-pci ids=<GPU-1>,<GPU-1-Audio>,<GPU-2>,<GPU-2-Audio>,<GPU-3>,<GPU-3-Audio>

 

Never do this. Although it looks much cleaner, it won’t work. I do wish we could specify PCI/PCIE IDs this way.

options vfio-pci ids=<GPU-1>,<GPU-1-Audio>

options vfio-pci ids=<GPU-2>,<GPU-2-Audio>

options vfio-pci ids=<GPU-3>,<GPU-3-Audio>
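If you script your setup, the single working ids= line can be assembled from a space-separated list. A sketch; the IDs below are placeholders, substitute your own from lspci -nn:

```shell
#!/bin/bash
# Assemble the one-line vfio-pci option from a list of vendor:device IDs.
# The IDs below are placeholders; use your own devices' IDs.
ids="10de:2786 10de:22bc"
echo "options vfio-pci ids=$(echo $ids | tr ' ' ',')"
# Prints: options vfio-pci ids=10de:2786,10de:22bc
```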

 

Why Disable VGA Arbitration for the GPUs and How to Do It?

If you’re using UEFI/OVMF BIOS on the Proxmox VE virtual machine where you want to passthrough the GPU, you can disable VGA arbitration, which reduces the legacy VGA code required during boot.

To disable VGA arbitration for the GPUs, add disable_vga=1 at the end of the vfio-pci option in the /etc/modprobe.d/vfio.conf file as shown below:

options vfio-pci ids=<GPU-1>,<GPU-1-Audio>,<GPU-2>,<GPU-2-Audio> disable_vga=1

 

What if my GPU is Still not Using the VFIO Driver Even After Configuring VFIO?

Even after doing everything correctly, if your GPU still does not use the VFIO driver, you will need to try booting Proxmox VE with kernel options that disable the video framebuffer.

On Proxmox VE 7.1 and older, the nofb nomodeset video=vesafb:off video=efifb:off video=simplefb:off kernel options disable the GPU framebuffer for your Proxmox VE server.

On Proxmox VE 7.2 and newer, the initcall_blacklist=sysfb_init kernel option does a better job at disabling the GPU framebuffer for your Proxmox VE server.

Open the GRUB bootloader configuration file /etc/default/grub file with the nano text editor with the following command:

$ nano /etc/default/grub

 

Add the kernel option initcall_blacklist=sysfb_init at the end of the GRUB_CMDLINE_LINUX_DEFAULT.

Once you’re done, press <Ctrl> + X followed by Y and <Enter> to save the file and make sure to update the Proxmox VE GRUB bootloader for the changes to take effect.

 

GPU Passthrough Showed No Errors, But I’m Getting a Black Screen on the Monitor Connected to the GPU Passed to the Proxmox VE VM, Why?

Once you’ve passed a GPU to a Proxmox VE virtual machine, make sure to use the Default Graphics card before you start the virtual machine. This way, you will be able to access the display of the virtual machine from the Proxmox VE web management UI, download the GPU driver installer on the virtual machine, and install it on the virtual machine.

Once the GPU driver is installed on the virtual machine, the screen of the virtual machine will be displayed on the monitor connected to the GPU that you’ve passed to the virtual machine as well.

 

Once the GPU driver is installed on the virtual machine and the screen of the virtual machine is displayed on the monitor connected to the GPU (passed to the virtual machine), power off the virtual machine and set the Display Graphic card of the virtual machine to none.

Once you’re set, the next time you power on the virtual machine, the screen of the virtual machine will be displayed on the monitor connected to the GPU (passed to the virtual machine) only, nothing will be displayed on the Proxmox VE web management UI. This way, you will have the same experience as using a real computer even though you’re using a virtual machine.

 

Remember, never use SPICE, VirtIO GPU, and VirGL GPU Display Graphic card on the Proxmox VE virtual machine that you’re configuring for GPU passthrough as it has a high chance of failure.

 

What is AMD Vendor Reset Bug and How to Solve it?

AMD GPUs have a well-known bug called “vendor reset bug”. Once an AMD GPU is passed to a Proxmox VE virtual machine, and you power off this virtual machine, you won’t be able to use the AMD GPU in another Proxmox VE virtual machine. At times, your Proxmox VE server will become unresponsive as a result. This is called the “vendor reset bug” of AMD GPUs.

The reason this happens is that AMD GPUs can’t reset themselves correctly after being passed to a virtual machine. To fix this problem, you will have to reset your AMD GPU properly. For more information on installing the AMD vendor reset on Proxmox VE, read this article and read this thread on Proxmox VE forum. Also, check the vendor reset GitHub page.

 

How to Provide a vBIOS for the Passed GPU on a Proxmox VE Virtual Machine?

If you’ve installed the GPU on the first slot of your motherboard, you might not be able to passthrough the GPU in a Proxmox VE virtual machine by default. Some motherboards shadow the vBIOS of the GPU installed on the first slot by default which is the reason the GPU installed on the first slot of those motherboards can’t be passed to virtual machines.

The solution to this problem is to install the GPU on the second slot of the motherboard, extract the vBIOS of the GPU, install the GPU on the first slot of the motherboard, and passthrough the GPU to a Proxmox VE virtual machine along with the extracted vBIOS of the GPU.

NOTE: To learn how to extract the vBIOS of your GPU, read this article.

Once you’ve obtained the vBIOS for your GPU, you must store the vBIOS file in the /usr/share/kvm/ directory of your Proxmox VE server to access it.

Once the vBIOS file for your GPU is stored in the /usr/share/kvm/ directory, you need to configure your virtual machine to use it. Currently, there is no way to specify the vBIOS file for PCI/PCIE devices of Proxmox VE virtual machines from the Proxmox VE web management UI. So, you will have to do everything from the Proxmox VE shell/command-line.

You can find the Proxmox VE virtual machine configuration files in the /etc/pve/qemu-server/ directory of your Proxmox VE server. Each Proxmox VE virtual machine has one configuration file in this directory in the format <VM-ID>.conf.

For example, to open the Proxmox VE virtual machine configuration file (for editing) for the virtual machine ID 100, you will need to run the following command:

$ nano /etc/pve/qemu-server/100.conf

 

In the virtual machine configuration file, you will need to append romfile=<vBIOS-filename> in the hostpciX line which is responsible for passing the GPU on the virtual machine.

For example, if the vBIOS filename for my GPU is gigabyte-nvidia-1050ti.bin, and I have passed the GPU on the first slot (slot 0) of the virtual machine (hostpci0), then in the 100.conf file, the line should be as follows:

hostpci0: <PCI-ID-of-GPU>,x-vga=on,romfile=gigabyte-nvidia-1050ti.bin

 

Once you’re done, save the virtual machine configuration file by pressing <Ctrl> + X followed by Y and <Enter>, start the virtual machine, and check if the GPU passthrough is working.

 

What to do if Some Apps Crash the Proxmox VE Windows Virtual Machine?

Some apps such as GeForce Experience, Passmark, etc. might crash Proxmox VE Windows virtual machines. You might also experience a sudden blue screen of death (BSOD) on your Proxmox VE Windows virtual machines. This happens because the Windows virtual machine may try to access model-specific registers (MSRs) that are not actually available, and depending on how your hardware handles MSR requests, the virtual machine may crash.
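To make the failure mode concrete, this is roughly what an MSR access looks like from a Linux shell. This is only an illustration using msr-tools; 0x1b (IA32_APIC_BASE) is just a well-known register picked as an example, and reading it requires root and the msr kernel module:

```shell
# Illustration: reading an MSR from userspace with msr-tools.
# A guest poking a register the host won't emulate is exactly the kind of
# access that the ignore_msrs option (below) papers over.
modprobe msr 2>/dev/null || true
rdmsr 0x1b 2>/dev/null || echo "MSR not readable in this environment"
```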

The solution to this problem is to ignore MSR requests on your Proxmox VE server.

To configure MSRs on your Proxmox VE server, open the /etc/modprobe.d/kvm.conf file with the nano text editor as follows:

$ nano /etc/modprobe.d/kvm.conf

 

To ignore MSRs on your Proxmox VE server, add the following line to the /etc/modprobe.d/kvm.conf file.

options kvm ignore_msrs=1

 

Once MSRs are ignored, you might see a lot of MSRs warning messages in your dmesg system log. To avoid that, you can ignore MSRs as well as disable logging MSRs warning messages by adding the following line instead:

options kvm ignore_msrs=1 report_ignored_msrs=0

 

Once you’re done, press <Ctrl> + X followed by Y and <Enter> to save the /etc/modprobe.d/kvm.conf file and update the initramfs of your Proxmox VE server for the changes to take effect.

 

How to Solve HDMI Audio Crackling/Broken Problems on Proxmox VE Linux Virtual Machines?

If you’ve passed the GPU to a Linux Proxmox VE virtual machine and you’re getting bad audio quality on the virtual machine, you will need to enable MSI (Message Signal Interrupt) for the audio device on the Proxmox VE virtual machine.

To enable MSI on the Linux Proxmox VE virtual machine, open the /etc/modprobe.d/snd-hda-intel.conf file with the nano text editor on the virtual machine with the following command:

$ sudo nano /etc/modprobe.d/snd-hda-intel.conf

 

Add the following line and save the file by pressing <Ctrl> + X followed by Y and <Enter>.

options snd-hda-intel enable_msi=1

 

For the changes to take effect, reboot the Linux virtual machine with the following command:

$ sudo reboot

 

Once the virtual machine boots, check if MSI is enabled for the audio device with the following command:

$ sudo lspci -vv

 

If MSI is enabled for the audio device on the virtual machine, you should see the marked line in the audio device information.
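Since the full `lspci -vv` output is long, you can narrow it down to the audio device and its MSI capability line. This is a sketch that assumes your HDMI audio device reports the standard audio controller class (0403); "MSI: Enable+" in the output means MSI is active:

```shell
# Filter lspci to audio-class devices (PCI class 0403) and show the MSI line.
# Run as root on the VM; capability details are hidden for non-root users.
lspci -vv -d ::0403 2>/dev/null | grep 'MSI:' || echo "no matching device here"
```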

 

How to Update Proxmox VE initramfs?

Every time you make any changes to files in the /etc/modules-load.d/ and /etc/modprobe.d/ directories, you must update the initramfs of your Proxmox VE 8 installation with the following command:

$ update-initramfs -u -k all

 

Once Proxmox VE initramfs is updated, reboot your Proxmox VE server for the changes to take effect.

$ reboot

 

How to Update Proxmox VE GRUB Bootloader?

Every time you update the Proxmox VE GRUB boot configuration file /etc/default/grub, you must update the GRUB bootloader for the changes to take effect.

To update the Proxmox VE GRUB bootloader with the new configurations, run the following command:

$ update-grub2

 

Once the GRUB bootloader is updated with the new configuration, reboot your Proxmox VE server for the changes to take effect.

$ reboot

 

Conclusion

In this article, I have discussed some of the most common Proxmox VE PCI/PCIE passthrough and GPU passthrough problems and the steps you can take to solve them.

 

References

  1. [TUTORIAL] – PCI/GPU Passthrough on Proxmox VE 8 : Installation and configuration | Proxmox Support Forum
  2. Ultimate Beginner’s Guide to Proxmox GPU Passthrough
  3. Reading and Writing Model Specific Registers in Linux
  4. The MSI Driver Guide HOWTO — The Linux Kernel documentation

 

 

Proxmox VE (Virtualization Environment) is an open-source enterprise virtualization and containerization platform. It has a built-in user-friendly web interface for managing virtual machines and LXC containers. It has other features such as Ceph software-defined storage (SDS), software-defined networking (SDN), high availability (HA) clustering, and many more.

After the recent Broadcom acquisition of VMware, the cost of VMware products has risen to the point that many small to medium-sized companies are or will be forced to switch to alternate products. Even the free VMware ESXi has been discontinued, which is bad news for homelab users. Proxmox VE is one of the best alternatives to VMware vSphere and it has the same set of features as VMware vSphere (with a few exceptions, of course). Proxmox VE is open-source and free, which is great for home labs as well as businesses. Proxmox VE also has an optional enterprise subscription that you can purchase if needed.

In this article, I will show you how to install Proxmox VE 8 on your server. I will cover the Graphical UI-based installation method of Proxmox VE as well as the Terminal UI-based installation method for systems that have problems with the graphical installer.

 

Table of Contents

  1. Booting Proxmox VE 8 from a USB Thumb Drive
  2. Installing Proxmox VE 8 using Graphical UI
  3. Installing Proxmox VE 8 using Terminal UI
  4. Accessing Proxmox VE 8 Management UI from a Web Browser
  5. Enabling Proxmox VE Community Package Repositories
  6. Keeping Proxmox VE Up-to-date
  7. Conclusion
  8. References

 

Booting Proxmox VE 8 from a USB Thumb Drive

First, you need to download the Proxmox VE 8 ISO image and create a bootable USB thumb drive of Proxmox VE 8. If you need any assistance on that, read this article.

Once you’ve created a bootable USB thumb drive of Proxmox VE 8, power off your server, insert the bootable USB thumb drive on your server, and boot the Proxmox VE 8 installer from it. Depending on the motherboard manufacturer, you need to press a certain key after pressing the power button to boot from the USB thumb drive. If you need any assistance on booting your server from a USB thumb drive, read this article.

Once you’ve successfully booted from the USB thumb drive, the Proxmox VE GRUB menu should be displayed.

 

Installing Proxmox VE 8 using Graphical UI

To install Proxmox VE 8 using a graphical user interface, select Install Proxmox VE (Graphical) from the Proxmox VE GRUB menu and press <Enter>.

 

The Proxmox VE installer should be displayed.

Click on I agree.

 

Now, you have to configure the disk for the Proxmox VE installation.

You can configure the disk for Proxmox VE installation in different ways:

  1. If you have a single 500GB/1TB (or larger capacity) SSD/HDD on your server, you can use it for Proxmox VE installation as well as storing virtual machine images, container images, snapshots, backups, ISO images, and so on. That’s not very safe, but you can try out Proxmox this way without needing a lot of hardware resources.
  2. You can use a small 64GB or 128GB SSD for Proxmox VE installation only. Once Proxmox VE is installed, you can create additional storage pools for storing virtual machine images, container images, snapshots, backups, ISO images, and so on.
  3. You can create a big ZFS or BTRFS RAID for Proxmox VE installation which will also be used for storing virtual machine images, container images, snapshots, backups, ISO images, and so on.

 

a) To install Proxmox VE on a single SSD/HDD and also use the SSD/HDD for storing virtual machine and container images, ISO images, virtual machine and container snapshots, virtual machine and container backups, etc., select the SSD/HDD from the Target Harddisk dropdown menu[1] and click on Next[2].

Proxmox VE will use a small portion of the free disk space for the Proxmox VE root filesystem and the rest of the disk space will be used for storing virtual machine and container data.

 

If you want to change the filesystem of your Proxmox VE installation or configure the size of different Proxmox VE partitions/storages, select the HDD/SSD you want to use for your Proxmox VE installation from the Target Harddisk dropdown menu and click on Options.

 

An advanced disk configuration window should be displayed.

From the Filesystem dropdown menu, select your desired filesystem. ext4 and xfs filesystems are supported for single-disk Proxmox VE installation at the time of this writing[1].

Other storage configuration parameters are:

hdsize[2]: By default, Proxmox VE will use all the disk space of the selected HDD/SSD. To keep some disk space free on the selected HDD/SSD, type in the amount of disk space (in GB) that you want Proxmox VE to use; the rest of the disk space will be left free.

swapsize[3]: By default, Proxmox VE will use 4GB to 8GB of disk space for swap depending on the amount of memory/RAM you have installed on the server. To set a custom swap size for Proxmox VE, type in your desired swap size (in GB unit) here.

maxroot[4]: Defines the maximum disk space to use for the Proxmox VE LVM root volume/filesystem.

minfree[5]: Defines the minimum disk space that must be free in the Proxmox VE LVM volume group (VG). This space will be used for LVM snapshots.

maxvz[6]: Defines the maximum disk space to use for the Proxmox VE LVM data volume where virtual machine and container data/images will be stored.

Once you’re done with the disk configuration, click on OK[7].

 

To install Proxmox VE on disk with your desired storage configuration, click on Next.

 

b) To install Proxmox VE on a small SSD and create the necessary storage for the virtual machine and container data later, select the SSD from the Target Harddisk dropdown menu[1] and click on Options[2].

 

Set maxvz to 0 to disable virtual machine and container storage on the SSD where Proxmox VE will be installed and click on OK.

 

Once you’re done, click on Next.

 

c) To create a ZFS or BTRFS RAID and install Proxmox VE on the RAID, click on Options.

 

You can pick different ZFS and BTRFS RAID types from the Filesystem dropdown menu. Each of these RAID types works differently and requires a different number of disks. For more information on how different RAID types work, their requirements, features, data safety, etc, read this article.

 

RAID0, RAID1, and RAID10 are discussed in this article thoroughly. RAIDZ-1 and RAIDZ-2 work in the same way as RAID5 and RAID6 respectively. RAID5 and RAID6 are also discussed in this article.

RAIDZ-1 requires at least 2 disks (3 disks recommended), uses single parity, and can sustain only 1 disk failure.

RAIDZ-2 requires at least 3 disks (4 disks recommended), uses double parity, and can sustain 2 disk failures.

RAIDZ-3 requires at least 4 disks (5 disks recommended), uses triple parity, and can sustain 3 disk failures.
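The usable capacity of a RAIDZ pool can be roughly estimated as (number of disks − parity disks) × disk size. A quick sketch, using an assumed example of four 2 TB disks in RAIDZ-2:

```shell
# Rough usable-capacity estimate for RAIDZ: (disks - parity) * disk size.
# Example values are assumptions: 4x 2TB disks in RAIDZ-2 (double parity).
disks=4; parity=2; size_tb=2
echo "$(( (disks - parity) * size_tb )) TB usable"   # prints "4 TB usable"
```

This ignores ZFS metadata and padding overhead, so treat it as an upper bound.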

 

Although you can create BTRFS RAIDs on Proxmox VE, at the time of this writing, BTRFS on Proxmox VE is still in technology preview. So, I don’t recommend using it in production systems. I will demonstrate ZFS RAID configuration on Proxmox VE in this article.

 

To create a ZFS RAID for Proxmox VE installation, select your desired ZFS RAID type from the Filesystem dropdown menu[1]. From the Disk Setup tab, select the disks that you want to use for the ZFS RAID using the Harddisk X dropdown menus[2]. If you don’t want to use a disk for the ZFS RAID, select – do not use – from the respective Harddisk X dropdown menu[3].

 

From the Advanced Options tab, you can configure different ZFS filesystem parameters.

ashift[1]: You can set ZFS block size using this option. The block size is calculated using the formula 2^ashift. The default ashift value is 12, which is 2^12 = 4096 bytes = 4 KB block size. A 4 KB block size is good for SSDs. If you’re using a mechanical hard drive (HDD), you need to set ashift to 9 (2^9 = 512 bytes) as HDDs use a 512-byte block size.

compress[2]: You can enable/disable ZFS compression from this dropdown menu. To enable compression, set compression to on. To disable compression, set compression to off. When compression is on, the default ZFS compression algorithm (lz4 at the time of this writing) is used. You can select other ZFS compression algorithms (i.e. lzjb, zle, gzip, zstd) as well if you have such preferences.

checksum[3]: ZFS checksums are used to detect corrupted files so that they can be repaired. You can enable/disable ZFS checksum from this dropdown menu. To enable ZFS checksum, set checksum to on. To disable ZFS checksum, set checksum to off. When checksum is on, the fletcher4 algorithm is used for non-deduped (deduplication disabled) datasets and sha256 algorithm is used for deduped (deduplication enabled) datasets by default.

copies[4]: You can set the number of redundant copies of the data you want to keep in your ZFS RAID. This is in addition to the RAID level redundancy and provides extra data protection. The default number of copies is 1, and you can store at most 3 copies of the data in your ZFS RAID. This feature is also known as ditto blocks.

ARC max size[5]: You can set the maximum amount of memory ZFS is allowed to use for the Adaptive Replacement Cache (ARC) from here.

hdsize[6]: By default, all the free disk space is used for the ZFS RAID. If you want to keep some portion of the disk space of each SSD free and use the rest for the ZFS RAID, type in the disk space you want to use (in GB) here. For example, if you have 40GB disks and you want to use 35GB of each disk for the ZFS RAID and keep 5GB of disk space free on each disk, you will need to type in 35GB here.

Once you’re done with the ZFS RAID configuration, click on OK[7].
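Of these parameters, ashift is the easiest to get wrong, since it is the power-of-two exponent of the ZFS block size. A quick sketch of the arithmetic, plus an optional check of your drives' sector sizes (the lsblk call simply lists block devices and may print nothing in some environments):

```shell
# ashift sets the ZFS block size as 2^ashift bytes.
echo $((2**12))   # 4096 -> 4 KB blocks (ashift=12, good for SSDs)
echo $((2**9))    # 512  -> 512-byte blocks (ashift=9, for 512-byte-sector HDDs)
# Check the physical/logical sector sizes of your drives before choosing ashift:
lsblk -o NAME,PHY-SEC,LOG-SEC 2>/dev/null || true
```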

 

Once you’re done with the ZFS storage configuration, click on Next to continue.

 

Type in the name of your country[1], select your time zone[2], select your keyboard layout[3], and click on Next[4].

 

Type in your Proxmox VE root password[1] and your email[2].

Once you’re done, click on Next[3].

 

If you have multiple network interfaces available on your server, select the one you want to use for accessing the Proxmox VE web management UI from the Management Interface dropdown menu[1]. If you have only a single network interface available on your server, it will be selected automatically.

Type in the domain name that you want to use for Proxmox VE in the Hostname (FQDN) section[2].

Type in your desired IP information for the Proxmox VE server[3] and click on Next[4].

 

An overview of your Proxmox VE installation should be displayed. If everything looks good, click on Install to start the Proxmox VE installation.

NOTE: If anything seems wrong or you want to change certain information, you can always click on Previous to go back and fix it. So, make sure to check everything before clicking on Install.

 

The Proxmox VE installation should start. It will take a while to complete.

 

Once the Proxmox VE installation is complete, you will see the following window. Your server should restart within a few seconds.

 

On the next boot, you will see the Proxmox VE GRUB boot menu.

 

Once Proxmox VE is booted, you will see the Proxmox VE command-line login prompt.

You will also see the access URL of the Proxmox VE web-based management UI.

 

Installing Proxmox VE 8 using Terminal UI

On some hardware, the Proxmox VE graphical installer may not work. In that case, you can always use the Proxmox VE terminal installer. You will find the same options in the Proxmox VE terminal installer as in the graphical installer. So, you should not have any problems installing Proxmox VE on your server using the terminal installer.

To use the Proxmox VE terminal installer, select Install Proxmox VE (Terminal UI) from the Proxmox VE GRUB boot menu and press <Enter>.

 

Select <I agree> and press <Enter>.

 

To install Proxmox VE on a single disk, select an HDD/SSD from the Target harddisk section, select <Next>, and press <Enter>.

 

For advanced disk configuration or ZFS/BTRFS RAID setup, select <Advanced options> and press <Enter>.

 

You will find the same disk configuration options as in the Proxmox VE graphical installer. I have already discussed all of them in the Proxmox VE Graphical UI installation section. Make sure to check it out for detailed information on all of those disk configuration options.

Once you’ve configured the disk/disks for the Proxmox VE installation, select <Ok> and press <Enter>.

 

Once you’re done with advanced disk configuration for your Proxmox VE installation, select <Next> and press <Enter>.

 

Select your country, timezone, and keyboard layout.

Once you’re done, select <Next> and press <Enter>.

 

Type in your Proxmox VE root password and email address.

Once you’re done, select <Next> and press <Enter>.

 

Configure the management network interface for Proxmox VE, select <Next>, and press <Enter>.

 

An overview of your Proxmox VE installation should be displayed. If everything looks good, select <Install> and press <Enter> to start the Proxmox VE installation.

NOTE: If anything seems wrong or you want to change certain information, you can always select <Previous> and press <Enter> to go back and fix it. So, make sure to check everything before installing Proxmox VE.

 

The Proxmox VE installation should start. It will take a while to complete.

 

Once the Proxmox VE installation is complete, you will see the following window. Your server should restart within a few seconds.

 

Once Proxmox VE is booted, you will see the Proxmox VE command-line login prompt.

You will also see the access URL of the Proxmox VE web-based management UI.

 

Accessing Proxmox VE 8 Management UI from a Web Browser

To access the Proxmox VE web-based management UI, you need a modern web browser (e.g. Google Chrome, Microsoft Edge, Mozilla Firefox, Opera, Apple Safari).

Open a web browser of your choice and visit the Proxmox VE access URL (i.e. https://192.168.0.105:8006) from the web browser.

By default, Proxmox VE uses a self-signed SSL certificate which your web browser will not trust. So, you will see a similar warning.
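If you prefer, you can also confirm from the command line that the management UI is answering on port 8006 before opening a browser. This is a sketch using the example access URL from this article (adjust the IP for your server); -k tells curl to skip verification of the self-signed certificate:

```shell
# Probe the Proxmox VE web UI; prints the HTTP status code (000 on failure).
code=$(curl -kso /dev/null -w '%{http_code}' https://192.168.0.105:8006/ 2>/dev/null || echo 000)
echo "$code"
```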

To accept the Proxmox VE self-signed SSL certificate, click on Advanced.

 

Then, click on Accept the Risk and Continue.

 

You will see the Proxmox VE login prompt.

Type in your Proxmox VE login username (root) and password[1] and click on Login[2].

 

You should be logged in to your Proxmox VE web-management UI.

As you’re using the free version of Proxmox VE, you will see a No valid subscription warning message every time you log in to Proxmox VE. To ignore this warning and continue using Proxmox VE for free, just click on OK.

 

The No valid subscription warning should be gone. Proxmox VE is now ready to use.

 

Enabling Proxmox VE Community Package Repositories

If you want to use Proxmox VE for free, after installing Proxmox VE on your server, one of the first things you want to do is disable the Proxmox VE enterprise package repositories and enable the Proxmox VE community package repositories. This way, you can get access to the Proxmox VE package repositories for free and keep your Proxmox VE server up-to-date.

To learn how to enable the Proxmox VE community package repositories, read this article.

 

Keeping Proxmox VE Up-to-date

After installing Proxmox VE on your server, you should check if new updates are available for your Proxmox VE server. If new updates are available, you should install them as it will improve the performance, stability, and security of your Proxmox VE server.

For more information on keeping your Proxmox VE server up-to-date, read this article.

 

Conclusion

In this article, I have shown you how to install Proxmox VE on your server using the Graphical installer UI and the Terminal installer UI. The Terminal installer UI is for systems that don’t support the Graphical installer UI. So, if you’re having difficulty with the Graphical installer UI, the Terminal installer UI will still work and save your day. I have also discussed and demonstrated different disk/storage configuration methods for Proxmox VE, including configuring a ZFS RAID and installing Proxmox VE on it.

 

References

  1. RAIDZ Types Reference
  2. ZFS/Virtual disks – ArchWiki
  3. ZFS Tuning Recommendations | High Availability
  4. The copies Property
  5. Checksums and Their Use in ZFS — OpenZFS documentation
  6. ZFS ARC Parameters – Oracle Solaris Tunable Parameters Reference Manual

Most of the operating system distributes their installer program in ISO image format. So, the most common way of installing an operating system on a Proxmox VE virtual machine is using an ISO image of that operating system. You can obtain the ISO image file of your favorite operating systems from their official website.

To install your favorite operating system on a Proxmox VE virtual machine, the ISO image of that operating system must be available in a proper storage location on your Proxmox VE server.

The Proxmox VE storage that supports ISO image files has a section ISO Images and has options for uploading and downloading ISO images.

 

In this article, I will show you how to upload an ISO image to your Proxmox VE server from your computer. I will also show you how to download an ISO image directly on your Proxmox VE server using the download link or URL of that ISO image.

 

Table of Contents

  1. Uploading an ISO Image on Proxmox VE Server from Your Computer
  2. Downloading an ISO Image on Proxmox VE Server using URL
  3. Conclusion

 

Uploading an ISO Image on Proxmox VE Server from Your Computer

To upload an ISO image on your Proxmox VE server from your computer, navigate to the ISO Images section of an ISO image-supported storage from the Proxmox VE web management UI and click on Upload.

 

Click on Select File from the Upload window.

 

Select the ISO image file that you want to upload on your Proxmox VE server from the filesystem of your computer[1] and click on Open[2].

 

Once the ISO image file is selected, the ISO image file name will be displayed in the File name section. If you want, you can modify the ISO image file name which will be stored on your Proxmox VE server once it’s uploaded[1].

The size of the ISO image file will be displayed in the File size section[2].

Once you’re ready to upload the ISO image on your Proxmox VE server, click on Upload[3].

 

The ISO image file is being uploaded to the Proxmox VE server. It will take a few seconds to complete.

If for some reason you want to stop the upload process, click on Abort.

 

Once the ISO image file is uploaded to your Proxmox VE server, you will see the following window. Just close it.

 

Shortly, the ISO image that you’ve uploaded to your Proxmox VE server should be listed in the ISO Images section of the selected Proxmox VE storage.

 

Downloading an ISO Image on Proxmox VE Server using URL

To download an ISO image on your Proxmox VE server using a URL or download link, visit the official website of the operating system that you want to download and copy the download link or URL of the ISO image from the website.

For example, to download the ISO image of Debian 12, visit the official website of Debian from a web browser[1], right-click on Download, and click on Copy Link[2].

 

Then, navigate to the ISO Images section of an ISO image-supported storage from the Proxmox VE web management UI and click on Download from URL.

 

Paste the download link or URL of the ISO image in the URL section and click on Query URL.

 

Proxmox VE should check the ISO file URL and obtain the necessary information like the File name[1] and File size[2] of the ISO image file. If you want to save the ISO image file in a different name on your Proxmox VE server, just type it in the File name section[1].

Once you’re ready, click on Download[3].

 

Proxmox VE should start downloading the ISO image file from the URL. It will take a while to complete.

 

Once the ISO image file is downloaded on your Proxmox VE server, you will see the following window. Just close it.

 

The downloaded ISO image file should be listed in the ISO Images section of the selected Proxmox VE storage.

 

Conclusion

In this article, I have shown you how to upload an ISO image from your computer on the Proxmox VE server. I have also shown you how to download an ISO image using a URL directly on your Proxmox VE server.

Keeping your Proxmox VE server up-to-date is important as newer updates come with bug fixes and improved security.

If you’re using the Proxmox VE community version (the free version of Proxmox VE without an enterprise subscription), installing new updates will also add new features to your Proxmox VE server as they are released.

In this article, I am going to show you how to check if new updates are available on your Proxmox VE server. If updates are available, I will also show you how to install the available updates on your Proxmox VE server.

 

Table of Contents

  1. Enabling the Proxmox VE Community Package Repositories
  2. Checking for Available Updates on Proxmox VE
  3. Installing Available Updates on Proxmox VE
  4. Conclusion

 

Enabling the Proxmox VE Community Package Repositories

If you don’t have an enterprise subscription on your Proxmox VE server, you need to disable the Proxmox VE enterprise package repositories and enable the Proxmox VE community package repositories to receive software updates on your Proxmox VE server.

If you want to keep using Proxmox VE for free, make sure to enable the Proxmox VE community package repositories.

 

Checking for Available Updates on Proxmox VE

To check if new updates are available on your Proxmox VE server, log in to your Proxmox VE web-management UI, navigate to the Updates section of your Proxmox VE server, and click on Refresh.

 

If you’re using the Proxmox VE community version (free version), you will see a No valid subscription warning. Click on OK to ignore the warning.

 

The Proxmox VE package database should be updated.

Close the Task viewer window.

 

If newer updates are not available, then you will see the No updates available message after the Proxmox VE package database is updated.

 

If newer updates are available for your Proxmox VE server, you will see a list of packages that can be updated as shown in the screenshot below.

 

Installing Available Updates on Proxmox VE

To install all the available updates on your Proxmox VE server, click on Upgrade.

 

A new NoVNC window should be displayed.

Press Y and then press <Enter> to confirm the installation.

 

The Proxmox VE updates are being downloaded. It will take a while to complete.

 

The Proxmox VE updates are being installed. It will take a while to complete.

 

At this point, the Proxmox VE updates should be installed.

Close the NoVNC window.

 

If you check for Proxmox VE updates, you should see the No updates available message. Your Proxmox VE server should be up-to-date[1].

After the updates are installed, it’s best to reboot your Proxmox VE server. To reboot your Proxmox VE server, click on Reboot[2].

 

Conclusion

In this article, I have shown you how to check if new updates are available for your Proxmox VE server. If new updates are available, I have also shown you how to install the available updates on your Proxmox VE server. You should always keep your Proxmox VE server up-to-date so that you get the latest bug fixes and security updates.

The full form of SR-IOV is Single Root I/O Virtualization. Some PCI/PCIE devices have multiple virtual functions and each of these virtual functions can be passed to a different virtual machine. SR-IOV is the technology that allows this type of PCI/PCIE passthrough.

For example, an 8-port SR-IOV capable network card has 8 virtual functions, 1 for each port. 8 of these virtual functions or network ports can be passed to 8 different virtual machines (VMs).
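On Linux, a card's virtual functions are exposed through sysfs, so you can query and create them from the shell. This is a sketch; "eth0" is a placeholder interface name (adjust it to your hardware), and creating VFs requires root plus an SR-IOV capable card with SR-IOV enabled in firmware:

```shell
# Inspect and create virtual functions of an SR-IOV NIC via sysfs.
dev=/sys/class/net/eth0/device
if [ -r "$dev/sriov_totalvfs" ]; then
    cat "$dev/sriov_totalvfs"        # maximum number of VFs the card supports
    echo 4 > "$dev/sriov_numvfs"     # create 4 VFs to pass to 4 VMs
else
    echo "eth0 is not SR-IOV capable (or does not exist)"
fi
```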

In this article, we will show you how to enable the SR-IOV feature from the BIOS/UEFI firmware of some of the most popular desktop motherboards (i.e. ASUS, ASRock, MSI, and Gigabyte).

 

Table of Contents

  1. How to Enable SR-IOV from the BIOS/UEFI Firmware of ASUS Motherboards
  2. How to Enable SR-IOV from the BIOS/UEFI Firmware of ASRock Motherboards
  3. How to Enable SR-IOV from the BIOS/UEFI Firmware of MSI Motherboards
  4. How to Enable SR-IOV from the BIOS/UEFI Firmware of Gigabyte Motherboards
  5. Conclusion
  6. References

 

How to Enable SR-IOV from the BIOS/UEFI Firmware of ASUS Motherboards

To enter the BIOS/UEFI Firmware of your ASUS motherboard, press <Delete> right after pressing the power button of your computer.

The BIOS/UEFI Firmware of ASUS motherboards has two modes: “EZ Mode” and “Advanced Mode”.

Once you’ve entered the BIOS/UEFI Firmware of your ASUS motherboard, you will be in “EZ Mode” by default. To enable SR-IOV on your ASUS motherboard, you have to enter the “Advanced Mode”.

To enter “Advanced Mode”, press <F7> while you’re in “EZ Mode”.

For both AMD and Intel systems, navigate to the “Advanced” tab (by pressing the arrow keys), navigate to “PCI Subsystem Settings”, and set “SR-IOV Support” to “Enabled”.

To save the changes, press <F10>, select OK, and press <Enter>.

The SR-IOV feature should be enabled. For more information on enabling the SR-IOV feature from the BIOS/UEFI Firmware of your ASUS motherboard, check the BIOS Manual of your ASUS motherboard.

 

How to Enable SR-IOV from the BIOS/UEFI Firmware of ASRock Motherboards

To enter the BIOS/UEFI Firmware of your ASRock motherboard, press <F2> or <Delete> right after pressing the power button of your computer.

If you’re using a high-end ASRock motherboard, you may find yourself in “Easy Mode” once you enter the BIOS/UEFI Firmware of your ASRock motherboard. In that case, press <F6> to switch to “Advanced Mode”.

If you’re using a cheap/mid-range ASRock motherboard, you may not have an “Easy Mode”. You will be taken to “Advanced Mode” directly. In that case, you won’t have to press <F6> to switch to “Advanced Mode”.

You will be in the “Main” tab by default. Press the <Right> arrow key to navigate to the “Advanced” tab of the BIOS/UEFI Firmware of your ASRock motherboard.

If you have an AMD processor, navigate to “PCI Configuration” and set “SR-IOV Support” to “Enabled”.

If you have an Intel processor, navigate to “Chipset Configuration” and set “SR-IOV Support” to “Enabled”.

To save the changes, press <F10>, select Yes, and press <Enter>.

The SR-IOV feature should be enabled. For more information on enabling the SR-IOV feature from the BIOS/UEFI Firmware of your ASRock motherboard, check the BIOS Manual of your ASRock motherboard.

 

How to Enable SR-IOV from the BIOS/UEFI Firmware of MSI Motherboards

To enter the BIOS/UEFI Firmware of your MSI motherboard, press <Delete> right after pressing the power button of your computer.

The BIOS/UEFI Firmware of MSI motherboards has two modes: “EZ Mode” and “Advanced Mode”.

Once you’ve entered the BIOS/UEFI Firmware of your MSI motherboard, you will be in “EZ Mode” by default. To enable SR-IOV on your MSI motherboard, you have to enter the “Advanced Mode”.

To enter the “Advanced Mode”, press <F7> while you’re in “EZ Mode”.

From the “Advanced Mode”, navigate to “Settings”.

If you’re using an AMD processor, navigate to “Advanced” > “PCI Subsystem Settings” and set “SR-IOV Support” to “Enabled”.

If you’re using an Intel processor, navigate to “Advanced” > “PCIe/PCI Sub-system Settings” and set “SR-IOV Support” to “Enabled”.

NOTE: You may not find the “SR-IOV Support” option in the BIOS/UEFI firmware of your MSI motherboard. In that case, you can try updating the BIOS/UEFI firmware version and see if the option is available.

To save the changes, press <F10>, select Yes, and press <Enter>.

The SR-IOV feature should be enabled. For more information on enabling the SR-IOV feature from the BIOS/UEFI Firmware of your MSI motherboard, check the BIOS Manual of your MSI motherboard.

 

How to Enable SR-IOV from the BIOS/UEFI Firmware of Gigabyte Motherboards

To enter the BIOS/UEFI Firmware of your Gigabyte motherboard, press <Delete> right after pressing the power button of your computer.

The BIOS/UEFI Firmware of Gigabyte motherboards has two modes: “Easy Mode” and “Advanced Mode”.

To enable SR-IOV, you have to switch to “Advanced Mode”. If you’re in “Easy Mode”, press <F2> to switch to “Advanced Mode”.

If you have an AMD processor, navigate to the “Settings” tab, navigate to “IO Ports”,  and set “SR-IOV Support” to “Enabled”.

If you have an Intel processor, navigate to the “Advanced” tab, navigate to “PCI Subsystem Settings”, and set “SR-IOV Support” to “Enabled”.

NOTE: On newer Gigabyte motherboards (e.g., Z590, Z690, Z790), the SR-IOV option might be missing. In that case, try enabling the Intel virtualization technologies VT-x/VT-d and see if the SR-IOV option appears in the BIOS/UEFI firmware of your Gigabyte motherboard.

To save the changes, press <F10>, select Yes, and press <Enter>.

SR-IOV should be enabled for your processor. For more information on enabling SR-IOV on your Gigabyte motherboard, we recommend reading the “User Manual” or “BIOS Setup Manual” of your Gigabyte motherboard.
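Firmware settings aside, you can confirm from a running Linux system whether any of your PCI devices actually expose SR-IOV virtual functions. The following is a quick sketch using standard sysfs paths; it only reports what the kernel sees and changes nothing:

```shell
# SR-IOV-capable PCI devices expose an sriov_totalvfs file in sysfs.
# This loop lists each capable device and how many VFs it supports.
for dev in /sys/bus/pci/devices/*/sriov_totalvfs; do
    if [ -e "$dev" ]; then
        echo "$dev: $(cat "$dev") VFs supported"
    fi
done
echo "SR-IOV scan complete"
```

If no devices are listed, either the hardware has no SR-IOV-capable devices or the firmware option did not take effect.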

 

Conclusion

We showed you how to enable the SR-IOV CPU feature from the BIOS/UEFI firmware of some of the most popular desktop motherboards (i.e. ASUS, ASRock, MSI, and Gigabyte).

 

References

  1. ASUS ROG Maximus Z690 Hero BIOS Overview
  2. ASUS ROG STRIX X570-E Gaming WIFI II BIOS Walk Thru
  3. ASRock Bios Optimization! [AMD 7800X3D | X670E Taichi Carrara | XMP PC 5600 CL28 G.Skill | 4090HOF]
  4. ASRock Z690 Taichi BIOS Overview
  5. SR-IOV on MSI X470 Gaming Pro | MSI Global English Forum
  6. Bios Settings 7950x3D 7800x3D [Gigabyte Aorus Elite Ax x670]
  7. ASUS PRIME Z490-A BIOS Overview

 

The full form of IOMMU is Input Output Memory Management Unit. An IOMMU maps the virtual addresses seen by a device to physical memory addresses, which allows the device to be passed through to a virtual machine (VM).

VT-d does the same thing as IOMMU. The main difference is the name: IOMMU is the term AMD uses for its implementation (AMD-Vi), while VT-d is Intel’s implementation.

In this article, we will show you how to enable the IOMMU/VT-d CPU feature from the BIOS/UEFI firmware of some of the most popular desktop motherboards (i.e. ASUS, ASRock, MSI, and Gigabyte).

 

Table of Contents

  1. How to Enable IOMMU/VT-d from the BIOS/UEFI Firmware of ASUS Motherboards
  2. How to Enable IOMMU/VT-d from the BIOS/UEFI Firmware of ASRock Motherboards
  3. How to Enable IOMMU/VT-d from the BIOS/UEFI Firmware of MSI Motherboards
  4. How to Enable IOMMU/VT-d from the BIOS/UEFI Firmware of Gigabyte Motherboards
  5. Conclusion
  6. References

 

How to Enable IOMMU/VT-d from the BIOS/UEFI Firmware of ASUS Motherboards

To enter the BIOS/UEFI Firmware of your ASUS motherboard, press <Delete> right after pressing the power button of your computer.

The BIOS/UEFI Firmware of ASUS motherboards has two modes: “EZ Mode” and “Advanced Mode”.

Once you’ve entered the BIOS/UEFI Firmware of your ASUS motherboard, you will be in “EZ Mode” by default. To enable IOMMU/VT-d on your ASUS motherboard, you have to enter the “Advanced Mode”.

To enter “Advanced Mode”, press <F7> while you’re in “EZ Mode”.

If you have an AMD processor, navigate to the “Advanced” tab (by pressing the arrow keys), navigate to “AMD CBS”, and set “IOMMU” to “Enabled” from the BIOS/UEFI Firmware of your ASUS motherboard.

If you have an Intel processor, navigate to the “Advanced” tab (by pressing the arrow keys), navigate to “System Agent (SA) Configuration”, set “VT-d” to “Enabled”, and set “Control Iommu Pre-boot Behavior” to “Enable IOMMU during boot” from the BIOS/UEFI Firmware of your ASUS motherboard.

To save the changes, press <F10>, select OK, and press <Enter>.

The IOMMU/VT-d feature should be enabled. For more information on enabling the IOMMU/VT-d feature from the BIOS/UEFI Firmware of your ASUS motherboard, check the BIOS Manual of your ASUS motherboard.

 

How to Enable IOMMU/VT-d from the BIOS/UEFI Firmware of ASRock Motherboards

To enter the BIOS/UEFI Firmware of your ASRock motherboard, press <F2> or <Delete> right after pressing the power button of your computer.

If you’re using a high-end ASRock motherboard, you may find yourself in “Easy Mode” once you enter the BIOS/UEFI Firmware of your ASRock motherboard. In that case, press <F6> to switch to “Advanced Mode”.

If you’re using a cheap/mid-range ASRock motherboard, you may not have an “Easy Mode”. You will be taken to “Advanced Mode” directly. In that case, you won’t have to press <F6> to switch to “Advanced Mode”.

You will be in the “Main” tab by default. Press the <Right> arrow key to navigate to the “Advanced” tab of the BIOS/UEFI Firmware of your ASRock motherboard.

If you have an AMD processor, navigate to “AMD CBS” > “NBIO Common Options” and set “IOMMU” to “Enabled” from the BIOS/UEFI Firmware of your ASRock motherboard.

If you have an Intel processor, navigate to “Chipset Configuration” and set “VT-d” to “Enabled” from the BIOS/UEFI Firmware of your ASRock motherboard.

To save the changes, press <F10>, select Yes, and press <Enter>.

The IOMMU/VT-d feature should be enabled. For more information on enabling the IOMMU/VT-d feature from the BIOS/UEFI Firmware of your ASRock motherboard, check the BIOS Manual of your ASRock motherboard.

 

How to Enable IOMMU/VT-d from the BIOS/UEFI Firmware of MSI Motherboards

To enter the BIOS/UEFI Firmware of your MSI motherboard, press <Delete> right after pressing the power button of your computer.

The BIOS/UEFI Firmware of MSI motherboards has two modes: “EZ Mode” and “Advanced Mode”.

Once you’ve entered the BIOS/UEFI Firmware of your MSI motherboard, you will be in “EZ Mode” by default. To enable IOMMU/VT-d on your MSI motherboard, you have to enter the “Advanced Mode”.

To enter the “Advanced Mode”, press <F7> while you’re in “EZ Mode”.

Navigate to “OC settings”, scroll down to “CPU Features”, and press <Enter>.

If you have an AMD processor, set “IOMMU” to “Enabled”.

If you have an Intel processor, set “Intel VT-D Tech” to “Enabled”.

To save the changes, press <F10>, select Yes, and press <Enter>.

The IOMMU/VT-d feature should be enabled. For more information on enabling the IOMMU/VT-d feature from the BIOS/UEFI Firmware of your MSI motherboard, check the BIOS Manual of your MSI motherboard.

 

How to Enable IOMMU/VT-d from the BIOS/UEFI Firmware of Gigabyte Motherboards

To enter the BIOS/UEFI Firmware of your Gigabyte motherboard, press <Delete> right after pressing the power button of your computer.

The BIOS/UEFI Firmware of Gigabyte motherboards has two modes: “Easy Mode” and “Advanced Mode”.

To enable IOMMU/VT-d, you have to switch to the “Advanced Mode” of the BIOS/UEFI Firmware of your Gigabyte motherboard. If you’re in “Easy Mode”, you can press <F2> to switch to “Advanced Mode” on the BIOS/UEFI Firmware of your Gigabyte motherboard.

Use the arrow keys to navigate to the “Settings” tab.

If you have an AMD processor, navigate to “Miscellaneous” and set “IOMMU” to “Enabled” from the BIOS/UEFI Firmware of your Gigabyte motherboard.

If you have an Intel processor, navigate to “Miscellaneous” and set “VT-d” to “Enabled” from the BIOS/UEFI Firmware of your Gigabyte motherboard.

To save the changes, press <F10>, select Yes, and press <Enter>.

IOMMU/VT-d should be enabled for your processor. For more information on enabling IOMMU/VT-d on your Gigabyte motherboard, we recommend reading the “User Manual” or “BIOS Setup Manual” of your Gigabyte motherboard.
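Enabling the option in firmware is only half the story: it is worth confirming from Linux that the kernel actually brought the IOMMU up. The sketch below assumes a standard sysfs layout; on some setups you also need intel_iommu=on or amd_iommu=on on the kernel command line before any groups appear:

```shell
# When the IOMMU is active, the kernel populates /sys/kernel/iommu_groups
# with one directory per IOMMU group.
if [ -d /sys/kernel/iommu_groups ] && [ -n "$(ls -A /sys/kernel/iommu_groups 2>/dev/null)" ]; then
    echo "IOMMU is active: $(ls /sys/kernel/iommu_groups | wc -l) groups found"
else
    echo "IOMMU groups not found - re-check firmware and kernel parameters"
fi
```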

 

Conclusion

We showed you how to enable the IOMMU/VT-d CPU feature from the BIOS/UEFI firmware of some of the most popular desktop motherboards (i.e. ASUS, ASRock, MSI, and Gigabyte).

 

References

  1. ROG STRIX Z690 series BIOS Manual ( English Edition )
  2. ASUS ROG Maximus Z690 Hero BIOS Overview
  3. ROG STRIX X670E Series BIOS Manual ( English Edition )
  4. ASRock Bios Optimization! [AMD 7800X3D | X670E Taichi Carrara | XMP PC 5600 CL28 G.Skill | 4090HOF]
  5. ASRock Z690 Taichi BIOS Overview
  6. MSI MEG Z690 ACE BIOS Overview
  7. Pomoc techniczna cz. 1 – Ustawianie optymalne biosu i OC w płycie głównej MSI B450 Gaming Plus Max
  8. Bios Settings 7950x3D 7800x3D [Gigabyte Aorus Elite Ax x670]
  9. GIGABYTE Z690 Aorus Elite DDR4 Motherboard BIOS Overview

 

Proxmox VE 8 is the latest version of the Proxmox Virtual Environment. Proxmox VE is an open-source enterprise Type-I virtualization and containerization platform.

In this article, I am going to show you how to download the ISO image of Proxmox VE 8 and create a bootable USB thumb drive of Proxmox VE 8 on Windows 10/11 and Linux so that you can use it to install Proxmox VE 8 on your server and run virtual machines (VMs) and LXC containers.

 

Table of Contents

  1. Downloading the Proxmox VE 8 ISO Image
  2. Creating a Bootable USB Thumb Drive of Proxmox VE 8 on Windows 10/11
  3. Creating a Bootable USB Thumb Drive of Proxmox VE 8 on Linux
  4. Conclusion

 

Downloading the Proxmox VE 8 ISO Image

To download the ISO image of Proxmox VE 8, visit the official downloads page of Proxmox VE from your favorite web browser.

Once the page loads, click on Download from the Proxmox VE ISO Installer section.

 

Your browser should start downloading the Proxmox VE 8 ISO image. It will take a while to complete.

 

At this point, the Proxmox VE 8 ISO image should be downloaded.

 

Creating a Bootable USB Thumb Drive of Proxmox VE 8 on Windows 10/11

On Windows 10/11, you can use Rufus to create bootable USB thumb drives of different operating systems.

To download Rufus, visit the official website of Rufus from your favorite web browser.

Once the page loads, click on the Rufus download link as marked in the screenshot below.

 

Rufus should be downloaded.

 

Insert a USB thumb drive in your computer and double-click on the Rufus app file from the Downloads folder of your Windows 10/11 system to start Rufus.

 

Click on Yes.

 

Click on No.

 

Rufus should start.

First, select your USB thumb drive from the Device dropdown menu.

Then, click on SELECT to select the Proxmox VE 8 ISO image.

 

Select the Proxmox VE 8 ISO image from the Downloads folder of your Windows 10/11 system using the file picker and click on Open.

 

Click on OK.

 

Click on START.

 

Click on OK.

 

Click on OK.

NOTE: The contents of the USB thumb drive will be removed. So, make sure to move important files before you click on OK.

 

The Proxmox VE 8 ISO image is being written to the USB thumb drive. It will take a while to complete.

 

Once the Proxmox VE ISO image is written to the USB thumb drive, click on CLOSE.

Your USB thumb drive should be ready for installing Proxmox VE 8 on your server.

 

Creating a Bootable USB Thumb Drive of Proxmox VE 8 on Linux

On Linux, you can use the dd tool to create a bootable USB thumb drive of different operating systems from ISO image.

First, insert a USB thumb drive in your computer and run the following command to find the device name of your USB thumb drive (the -e7 option hides loop devices, which have major device number 7).

$ sudo lsblk -e7

 

In my case, the device name of my 32GB USB thumb drive is sda as you can see in the screenshot below.

 

Navigate to the Downloads directory of your Linux system and you should find the Proxmox VE 8 ISO image there.

$ cd ~/Downloads

$ ls -lh

 

To write the Proxmox VE 8 ISO image proxmox-ve_8.1-2.iso to the USB thumb drive sda, run the following command:

$ sudo dd if=proxmox-ve_8.1-2.iso of=/dev/sda bs=1M status=progress conv=noerror,sync

NOTE: The contents of the USB thumb drive will be erased. So, make sure to move important files before you run the command above.
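If you’re unsure what the dd options do, you can try them safely on a throwaway file first. The sketch below writes 4 MiB of zeros to a file in /tmp instead of a real device: bs=1M sets the block size and status=progress prints a live byte counter, just as in the command above:

```shell
# Safe demonstration of the dd flags against a regular file (no device touched)
dd if=/dev/zero of=/tmp/dd-demo.img bs=1M count=4 status=progress
ls -lh /tmp/dd-demo.img
rm /tmp/dd-demo.img
```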

 

The Proxmox VE 8 ISO image is being written to the USB thumb drive sda. It will take a while to complete.

 

At this point, the Proxmox VE 8 ISO image should be written to the USB thumb drive.

 

To safely remove the USB thumb drive from your computer, run the following command:

$ sudo eject /dev/sda

 

Your USB thumb drive should be ready for installing Proxmox VE 8 on any server.

 

Conclusion

In this article, I have shown you how to download the ISO image of Proxmox VE 8. I have also shown you how to download Rufus and use it to create a bootable USB thumb drive of Proxmox VE 8 on Windows 10/11, as well as how to create a bootable USB thumb drive of Proxmox VE 8 on Linux using the dd command.

In a lab environment, lots of new users will be using JupyterHub. The default Authenticator of JupyterHub allows only Linux system users to log in to JupyterHub. So, if you want to create a new JupyterHub user, you have to create a new Linux user first. Creating new Linux users manually can be a lot of hassle. Instead, you can configure JupyterHub to use FirstUseAuthenticator, which, as the name says, automatically creates a new user when someone logs in to JupyterHub for the first time. After the user is created, the same username and password can be used to log in to JupyterHub.

In this article, I am going to show you how to install the JupyterHub FirstUseAuthenticator on the JupyterHub Python virtual environment. I am also going to show you how to configure JupyterHub to use the FirstUseAuthenticator.

NOTE: If you don’t have JupyterHub installed on your computer, you can read one of the articles depending on the Linux distribution you’re using:

  1. How to Install the Latest Version of JupyterHub on Ubuntu 22.04 LTS/ Debian 12/Linux Mint 21
  2. How to Install the Latest Version of JupyterHub on Fedora 38+/RHEL 9/Rocky Linux 9

 

Table of Contents:

  1. Creating a Group for JupyterHub Users
  2. Installing JupyterHub FirstUseAuthenticator on the JupyterHub Virtual Environment
  3. Configuring JupyterHub FirstUseAuthenticator
  4. Restarting the JupyterHub Service
  5. Verifying if JupyterHub FirstUseAuthenticator is Working
  6. Creating New JupyterHub Users using JupyterHub FirstUseAuthenticator
  7. Conclusion
  8. References

 

Creating a Group for JupyterHub Users:

I want to keep all the new JupyterHub users in a Linux group jupyterhub-users for easier management.

You can create a new Linux group jupyterhub-users with the following command:

$ sudo groupadd jupyterhub-users

 

Installing JupyterHub FirstUseAuthenticator on the JupyterHub Virtual Environment:

If you’ve followed my JupyterHub Installation Guide to install JupyterHub on your favorite Linux distributions (Debian-based and RPM-based), you can install the JupyterHub FirstUseAuthenticator on the JupyterHub Python virtual environment with the following command:

$ sudo /opt/jupyterhub/bin/python3 -m pip install jupyterhub-firstuseauthenticator

 

The JupyterHub FirstUseAuthenticator should be installed on the JupyterHub virtual environment.

 

Configuring JupyterHub FirstUseAuthenticator:

To configure the JupyterHub FirstUseAuthenticator, open the JupyterHub configuration file jupyterhub_config.py with the nano text editor as follows:

$ sudo nano /opt/jupyterhub/etc/jupyterhub/jupyterhub_config.py

 

Type in the following lines in the jupyterhub_config.py configuration file.

# Configure FirstUseAuthenticator for Jupyter Hub

from jupyterhub.auth import LocalAuthenticator

from firstuseauthenticator import FirstUseAuthenticator

 

LocalAuthenticator.create_system_users = True

LocalAuthenticator.add_user_cmd = ['useradd', '--create-home', '--gid', 'jupyterhub-users', '--shell', '/bin/bash']

FirstUseAuthenticator.dbm_path = '/opt/jupyterhub/etc/jupyterhub/passwords.dbm'

FirstUseAuthenticator.create_users = True

 

class LocalNativeAuthenticator(FirstUseAuthenticator, LocalAuthenticator):

    pass

 

c.JupyterHub.authenticator_class = LocalNativeAuthenticator

 

Once you’re done, press <Ctrl> + X followed by Y and <Enter> to save the jupyterhub_config.py file.
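Before restarting the service, it can save a round-trip to syntax-check the configuration file. The command is demonstrated here on a throwaway stand-in file so it is self-contained; on your server, point the virtual environment’s interpreter (e.g. /opt/jupyterhub/bin/python3) at /opt/jupyterhub/etc/jupyterhub/jupyterhub_config.py instead:

```shell
# Stand-in config file for demonstration; py_compile only checks syntax,
# so undefined names like 'c' are fine here.
cat > /tmp/jupyterhub_config_demo.py <<'EOF'
c.JupyterHub.authenticator_class = 'dummy'
EOF
python3 -m py_compile /tmp/jupyterhub_config_demo.py && echo "config syntax OK"
```

If py_compile prints a SyntaxError instead, fix the reported line before restarting JupyterHub.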

 

Restarting the JupyterHub Service:

For the changes to take effect, restart the JupyterHub systemd service with the following command:

$ sudo systemctl restart jupyterhub.service

 

If the JupyterHub configuration file has no errors, the JupyterHub systemd service should run just fine.

 

Verifying if JupyterHub FirstUseAuthenticator is Working:

To verify whether the JupyterHub FirstUseAuthenticator is working, visit JupyterHub from your favorite web browser and try to log in as a random user with a short and easy password like 123, abc, etc.

You should see an error message saying that the password is too short and must be at least 7 characters long. This means that the JupyterHub FirstUseAuthenticator is working just fine.

 

Creating New JupyterHub Users using JupyterHub FirstUseAuthenticator:

To create a new JupyterHub user using the FirstUseAuthenticator, visit the JupyterHub login page from a web browser, type in your desired login username and the password that you want to set for the new user, and click on Sign in.

 

A new JupyterHub user should be created and your desired password should be set for the new user.

Once the new user is created, the newly created user should be logged into his/her JupyterHub account.

 

The next time you try to log in as the same user with a different password, you will see the error Invalid username or password. So, once a user is created using the FirstUseAuthenticator, only that user can log in with the same username and password combination. No one else can replace this user account.

 

Conclusion:

In this article, I have shown you how to install the JupyterHub FirstUseAuthenticator on the JupyterHub Python virtual environment. I have also shown you how to configure JupyterHub to use the FirstUseAuthenticator.

 


MySQL is a reliable and widely used DBMS that utilizes SQL and a relational model to manage data. MySQL is installed as part of the LAMP stack on Linux, but you can also install it separately. Even on Ubuntu 24.04, installing MySQL is straightforward. This guide outlines the steps to follow. Read on!

Step-By-Step Guide to Install MySQL on Ubuntu 24.04

If you have a user account on your Ubuntu 24.04 and have sudo privileges, installing MySQL requires you to follow the procedure below.

Step 1: Update the System’s Repository
When installing packages on Ubuntu, you should update the system’s repository to refresh the sources list. Doing so ensures the MySQL package you install is the latest stable version.

$ sudo apt update

Step 2: Install MySQL Server
Once the package index updates, the next step is to install the MySQL server package using the below command.

$ sudo apt install mysql-server

After the installation, start the MySQL service on your Ubuntu 24.04.

$ sudo systemctl start mysql.service

Step 3: Configure MySQL
Before we can start working with MySQL, we need to make a couple of configurations. First, access the MySQL shell using the command below.

$ sudo mysql

Once the shell opens up, set a password for the root user using the below syntax.

ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY 'your_password';

This statement also sets the account to use the mysql_native_password authentication method.

Exit the MySQL shell.

Step 4: Run the MySQL Script
One interesting feature of MySQL is that it offers a script that you should run to quickly set it up. The script prompts you to specify different settings based on your preference. For example, you will be prompted to set a password for the root user. Go through each prompt and respond accordingly.

$ sudo mysql_secure_installation

Step 5: Modify the Authentication Method
After successfully running the MySQL installation script, you should change the authentication method and set it to use the auth_socket plugin.

Start by accessing your MySQL shell using the root account.

$ mysql -u root -p

Once logged in, run the below command to modify the authentication plugin.

ALTER USER 'root'@'localhost' IDENTIFIED WITH auth_socket;

Step 6: Create a MySQL User
So far, we can only access MySQL using the root account. We should create a new user and specify what privileges they should have. When creating a new user, you must provide their username and login password using the syntax below.

CREATE USER 'username'@'localhost' IDENTIFIED BY 'password';

Now that the user is created, we need to specify what privileges the user has when using MySQL. For instance, you can give them privileges, such as CREATE, ALTER, etc., on a specific or all the databases.

Here’s an example where we’ve specified a few privileges to the added user on all available databases. Feel free to specify whichever privileges are ideal for your user.

GRANT CREATE, ALTER, INSERT, UPDATE, SELECT ON *.* TO 'username'@'localhost' WITH GRANT OPTION;

For the new user and the privileges to apply, flush the privileges and exit MySQL.

FLUSH PRIVILEGES;
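To double-check what the new account can actually do, you can list its grants from the MySQL shell before exiting. This is a quick sketch; 'username' is the example account created above:

```sql
SHOW GRANTS FOR 'username'@'localhost';
```

The output should list the GRANT statement you issued earlier.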

Step 7: Confirm the Created User
As the last step, we should verify that our user can access the database and has the specified privileges. Start by checking the MySQL service to ensure it is running.

$ sudo systemctl status mysql

Next, access MySQL using the credentials of the user you added in the previous step.

$ mysql -u username -p

A successful login confirms that you’ve successfully installed MySQL, configured it, and added a new user.

Conclusion

MySQL is a relational DBMS widely used for various purposes. It supports SQL in managing data, and this post discusses all the steps you should follow to install it on Ubuntu 24.04. Hopefully, you’ve installed MySQL on your Ubuntu 24.04 with the help of the covered steps.

Task Manager is an app on the Windows 10/11 operating system that is used to monitor the running apps and services of your Windows 10/11 operating system. The Task Manager app is also used for monitoring the CPU, memory, disk, network, GPU, and other hardware usage information.

A few screenshots of the Windows Task Manager app are shown below:

In this article, I am going to show you 6 different ways of opening the Task Manager app on Windows 10/11.

 

Table of Contents:

  1. Opening the Task Manager App from the Start Menu
  2. Opening the Task Manager App from the Windows Taskbar
  3. Opening the Task Manager App from Run Window
  4. Opening the Task Manager App from the Command Prompt/Terminal
  5. Opening the Task Manager App from the Windows Logon Menu
  6. Opening the Task Manager app Using the Keyboard Shortcut

 

1. Opening the Task Manager App from the Start Menu

Search for the term app:task in the Start Menu and click on the Task Manager app from the search result as marked in the screenshot below.

The Task Manager app should be opened.

 

2. Opening the Task Manager App from the Windows Taskbar

Right-click (RMB) on an empty location of the Windows taskbar and click on Task Manager.

The Task Manager app should be opened.

 

3. Opening the Task Manager App from Run Window

To open the Run window, press <Windows> + <R>.

In the Run window, type in taskmgr in the Open section and click on OK.

The Task Manager app should be opened.

 

4. Opening the Task Manager App from the Command Prompt/Terminal

To open the Terminal app, right-click (RMB) on the Start Menu and click on Terminal.

 

The Terminal app should be opened.

Type in the command taskmgr and press <Enter>. The Task Manager app should be opened.

 

5. Opening the Task Manager App from the Windows Logon Menu

To open the Windows logon menu, press <Ctrl> + <Alt> + <Delete>.

From the Windows logon menu, click on Task Manager. The Task Manager app should be opened.

 

6. Opening the Task Manager app Using the Keyboard Shortcut

The Windows 10/11 Task Manager app can be opened with the keyboard shortcut <Ctrl> + <Shift> + <Escape>.

 

Conclusion:

In this article, I have shown you how to open the Task Manager app on Windows 10/11 in 6 different ways. Feel free to use the method you like the best.

Python and R programming languages rely on Anaconda as their package and environment manager. With Anaconda, you get tons of the necessary packages for your data science, machine learning, or other computational tasks. To utilize Anaconda on Ubuntu 24.04, install the conda utility. This post shares the steps for installing conda for Python 3; we will install version 2024.2-1. Read on!

How to Install conda on Ubuntu 24.04

Anaconda is an open-source platform and by installing conda, you will have access to it and use it for any scientific computational tasks, such as machine learning. The beauty of Anaconda lies in its numerous scientific packages, ensuring you can freely use it for your project needs.

Installing conda on Ubuntu 24.04 follows a series of steps, which we’ve discussed in detail.

Step 1: Downloading the Anaconda Installer
When installing Anaconda, you should check and use the latest version of the installer script. You can access all the latest Anaconda3 installer scripts from the Anaconda Downloads Page.

As of writing this post, we have version 2024.2-1 as the latest version, and we can go ahead and download it using curl.

$ curl https://repo.anaconda.com/archive/Anaconda3-2024.2-1-Linux-x86_64.sh --output anaconda.sh

Ensure you change the version when using the above command. Also, navigate to where you want the installer script to be saved. In the above command, we’ve specified to save the installer as anaconda.sh, but you can use any preferred name.

The installer script is large and will take some time, depending on your network’s performance. Once the download is completed, verify the file is available using the ls command. Another crucial thing is to check the integrity of the installer script.
To do so, we’ve used the SHA-256 checksum by running the below command.

$ sha256sum anaconda.sh

Once you get the output, confirm that it matches against the available Anaconda3 hashes from the website. Once everything checks out, you can proceed with the installation.
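The comparison can also be automated with sha256sum -c, which checks a file against an expected hash and prints OK or FAILED. The technique is demonstrated below on a small throwaway file whose hash is known; for the real installer, substitute anaconda.sh and the hash published on the Anaconda downloads page:

```shell
# Create a demo file and verify it against its known SHA-256 hash.
# (Two spaces separate the hash from the filename in the checksum line.)
printf 'hello\n' > /tmp/hash-demo.txt
echo "5891b5b522d5df086d0ff0b110fbd9d21bb4fc7163af34d08286a2e846f6be03  /tmp/hash-demo.txt" | sha256sum -c -
rm /tmp/hash-demo.txt
```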

Step 2: Run the conda Installer Script
Anaconda has an installer script that will take you through installing it. To run the bash script, execute the below command.

$ bash anaconda.sh

The script will trigger different prompts that will walk you through the installation. For instance, you must press the Enter key to confirm that you are okay with the installation.
Next, a document containing the lengthy Anaconda license agreement will open.

Please go through it, and once you reach the bottom, type yes to confirm that you agree with the license terms.

You must also specify where you want Anaconda to be installed. By default, the script selects a location in your home directory, which is okay in most cases. However, if you prefer a different location, specify it and press the Enter key again to proceed with the process.

Conda will start installing, and the process will take a few minutes. In the end, you will get prompted to initialize Anaconda3. If you wish to initialize it later, choose ‘no.’ Otherwise, type ‘yes,’ as in our case.

That’s it! You will get an output thanking you for installing Anaconda3. This message confirms that the conda utility was installed successfully on Ubuntu 24.04, and you now have the green light to start using it.

Step 3: Activate the Installation and Test Anaconda3
Start by sourcing the ~/.bashrc with the below command.

$ source ~/.bashrc

Next, restart your shell to open up in the Anaconda3 base environment.
You can now check the installed conda version.

$ conda --version

Better yet, you can view all the available packages by listing them using the command below.

$ conda list

With that, you’ve installed Conda on Ubuntu 24.04. You can start working on your projects and maximize the power of Anaconda3 courtesy of its multiple packages.

Conclusion

Anaconda is installed by installing the conda command-line utility. To install conda, you must download its installer script, execute it, go through the installation prompts, and agree to the license terms. Once you complete the process, you can use Anaconda3 for your projects and leverage all the packages it offers.
