Everything posted by Blogger

  1. by: Always Sia Strike 2024-07-31T23:29:54-07:00 My, the year is flying by. I haven't written in a while - not for a lack of thoughts, but because of time, life, and the fact that I could probably be managing my time better - but oh :whale:. We're back though - so let's talk work community. During this year's Black in Data Week, there was a question during my session about how to get to know people organically and ask questions without fear when you start a new job. After sharing what has worked for me, the lady with the question came back to me with positive feedback that all the ideas were helpful. I didn't think anything of it until Wellington, one of my friendlies from the app whose mama named it Twitter, twote this and had me thinking:

He's so right. No one is going to care about your career more than you do. However, one of the people who can make the effort to drive your development is your manager. Wellington and I had an additional exchange in which he echoed how important community is. This brought me back to June and that lady's question during BID week - so I thought to share, in a less ephemeral format, what building a community at work looks like.

About Chasing Management

Before I share some tips, one sword I always fall on is - chase great management. If you can afford to extend a job search because you think you could get a better manager than the one who is offering you a job, do it. Managers are like a great orchestra during a fancy event. You don't think about the background music when it's playing and you're eating your food (this is what I imagine from all those movies :joy:). But you will KNOW if it's bad because something will sound off and irk your ears. When you are flying high and your manager is unblocking things, providing you chances to contribute, and running a smooth operation, you hardly think of them when you wake up in the morning - you just do your job. But if they're not good at what they do, you could wake up in the morning thinking "ugh - I gotta go work with/for this person?". It changes the temperature in the room. So if you can afford an extra two weeks on a job search to ask questions and get the best available manager on the market, consider investing in your mentals for the long term :heavy_exclamation_mark:

I'm sure you're like yeah great, Sia - how do I do that? Well not to toot toot, but here are some questions I like asking to learn a bit more about my potential new culture. Additionally, listen to one of my favorite humans and leaders, Taylor Poindexter, in this episode of the Code Newbie podcast talking about creating psychological safety at work (shout out to Saron and the team!). Taylor has been one of my champions at work and such a great manager for her team - I'm always a little envious I'm not on it :pleading_face: but I digress. Keep winning, my girl! Additionally, I'll start here a list of the best leaders I know - either from personal experience working with and/or for them, interviewing to work on their teams, or from secondhand knowledge of someone else's firsthand experience (someone I trust). As of this writing, they will be listed with a workplace they're currently at, and only if they publicly share it on the internet.
Taylor Poindexter, Engineering Manager II @ Spotify (Web, Full Stack Engineering)
Angie Jones, Global VP of Developer Relations @ TBD/Block
Kamana Sharma (Full Stack, Web, and Data Engineering)
Nivia Henry, Director of Engineering
Bryan Bischof (Data/ML/AI)
Jasmine Vasandani (Data Science, Data Products)
Dee Wolter (Accounting, Tax)
Divya Narayanan (Engineering, ML)
Dr. Russell Pierce (Data/ML/Computer Vision)
Marlena Alhayani (Engineering)
Andrew Cheong (Backend Engineering) - I'm still trying to convince him he'll be the best leader ever, still an IC :joy:

This is off the top of my head at 1:12am while watching a badminton match between Spain and the US (women's round of 16), so I may have forgotten someone, my bad - will keep revisiting and updating as I remember and learn about more humans I aspire to work with. Now the kinda maybe not so good news - you cannot control your manager circumstances all the time. Reorgs, layoffs, people advancing and leaving companies happen. And if you've had the privilege of working with great managers, they will leave because they are top of the line so everyone wants to work with them. That's where community matters. You can't put all your career development eggs in one managerial basket. Noooooow let's talk about how you can do that!! (I know, loooong tangent, but we're getting there).

Building Community at Work (Finally :roll_eyes:)

Let's start with the (should be but not always) obvious here - you are building genuine relationships. They therefore can't be transactional. This is about creating a sustainable community that carries the load together, and not giving you tips on how to be the tick that takes from everyone without giving back. With that,…

Find onboarding buddies

There are people who started working the same day you did. They will likely have the most in common with you from a workplace perspective. If you happen to run into one of these folks, check in about what's working and share tips that may have worked for you. When I first started working at my current job, I e-met Andy - a senior backend engineer. We chatted randomly in Slack the first few weeks while working on onboarding projects and found out that we would be working in sister orgs. Whenever I had questions, I'd ask him what he's learning, and every so often we'd "run into each other" in our teams' work Slacks. Sometimes Andy would even help review PRs for me because I had to write Java, and ya girl does not live there. How sweet is that? Medium story short, that's my work friend he a real good eng … you know the rest!

Ask all the questions!!

Remember that lady I told you about in the beginning? She had said (paraphrasing) Sia - I just got hired, how do I not look dumb asking questions when they just hired me? My response was: they hired you for your skill on the market, not your knowledge of the company. You are expected to have a learning curve, so take advantage of that to meet people by asking questions. If you have a Slack channel, activate those hidden helpers - they exist. You may know a lot about the coolest framework, but what about the review and releases process? What about how requests for changes are handled? Maybe you see some code that seems off to you - it could be that it's an intentional patch. The only way to know these idiosyncrasies is to ask.
I promise you someone else is also wondering, and by asking, you are
- Making it less scary for others to ask
- Increasing the knowledge sharing culture at your org/team/company
- Learning faster than you would if you tried to be your own hero (there's a place and time, don't overdo it when you're new and waste time recreating a wheel)

One of the best pieces of feedback I ever received at a workplace was that my curiosity and pace of learning were so fast. And to keep asking the questions. I'm summarizing here but that note was detailed and written so beautifully, it made me cry :sob:. It came from one of my favorite people who I have a 1:1 with in a few hours and who started out as … my first interviewer!

Who interviewed you?

Remember Andrew from my list of favorite leaders above? That's who wrote that tearjerking note (one of many by the way). He was the person who gave me my first technical screen when I was applying for my current job. After I got hired, I reached out and thanked him and hoped we would cross paths. And from above, you know now that he is also one of the best Slack helpers ever. Whenever I ask a question and see "Andrew is typing…", I grab some tea and a snack because I'm about to learn something soooo well, the experience needs to be savoured. That first note to say "hey, thank you for a great interview experience, I made it" has led to one of the best work siblings I've ever had. I also did the same with the recruiter and the engineering manager who did my behavioral interview. I should note - at my job, you don't necessarily get interviewed by the teammates you'll potentially work with. None of these folks have been my actual teammates, but we check in from time to time, and look out for each other. The manager was a machine learning engineering manager, Andrew is a backend person, I'm a data engineer - none of that matters. Community is multi-dimensional :heart:

I got all my sister teams and me

When you're learning and onboarding, you get to meet your teammates and learn about your domain. It is likely your team is not working in a vacuum. Your customers are either other teams or external customers - which means you have to verify things with other teams to serve the external ones. That's a great way to form relationships. You are going to be seeing these folks a lot when you work together, so you may as well set up a 1:1 for 20 minutes to meet and greet. It may not go anywhere in the beginning, but as you work on different projects, your conversations add up, you learn about each other's ways of working and values (subconsciously sometimes), and trade stories. It all adds up - that's :sparkles: community :sparkles:

Be nosy, Rosie

Ok this last one is for the brave. As a hermit, I'm braver in writing vs in person so I use that to my advantage. This is an extension of asking all the questions beyond onboarding questions. You ever run into a document or see a presentation shared in a meeting, and you want to know more? You could reach out to the presenters and ask follow up questions, check in with your teammates about how said thing impacts/touches your team, or just learn something new that increases your t-shaped (breadth of) knowledge. Over time, this practice has a two-fold benefit. You get more context beyond your team, which makes you more valuable in the long run because you end up living at the intersection of things and understand how everyone is connected.
For me, whenever I'm in a meeting and someone says "our team is working on changing system X to start doing Y", I'm able to see how that change affects multiple systems and teams, if there are folks who are not aware of the change who should know about it to plan ahead, and also how it changes planning for my own team. This leads us back to our community thing because… You inadvertently build community by becoming someone your teammates and other teams (even leaders!) trust to translate information between squads or assist in unblocking inter-team or inter-org efforts. This is how I've been able to keep people in mind when thinking of projects and in turn they do the same. It also helped me get promoted as far as I'm concerned (earlier this year). You see, reader, I switched managers and teams a few months before performance review season. And the people in the room deciding on promotions were never my managers. They were all folks from other teams that I'd worked on projects with, and because of the curiosity of understanding our intersections and being able to contribute to connected work, they knew enough about me to put their names on paper and say get that girl a bonus, promo, and title upgrade. I appreciate them dearly :heart:

So what did we learn?

All these things boil down to
- Finding your tribe from common contexts
- Leading with gratitude and having a teamwork mindset
- Staying curious, a.k.a. always be learning
- Playing the long game and not being transactional in your interactions

Works every time. So as we now watch the men's 1500m qualifiers of track and field at 3:13am, I hope you keep driving the car on your career and finding your tribe wherever it is you land. And congratulations to all your favorite Olympians!!
  2. Even on Linux, you can enjoy gaming and interact with fellow gamers via Steam. For a Linux gamer, Steam is a handy game distribution platform that allows you to install different games, including purchased ones. Moreover, with Steam, you can connect with other gamers and play multiplayer titles. Steam is a cross-platform game distribution platform that lets gamers purchase and install games on any device through a Steam account. This post gives different options for installing Steam on Ubuntu 24.04.

Different Methods of Installing Steam on Ubuntu 24.04

No matter the Ubuntu version that you use, there are three easy ways of installing Steam. For our guide, we are working on Ubuntu 24.04, and we've detailed the steps to follow for each method. Take a look!

Method 1: Install Steam via Ubuntu Repository

On Ubuntu, Steam can be installed from the multiverse repository by following the steps below.

Step 1: Add the Multiverse Repository
The multiverse repository isn't enabled on Ubuntu by default, but executing the following command will add it.

$ sudo add-apt-repository multiverse

Step 2: Refresh the Package Index
After adding the new repository, we must refresh the package index before we can install Steam.

$ sudo apt update

Step 3: Install Steam
Lastly, install Steam from the repository by running the APT command below.

$ sudo apt install steam

Method 2: Install Steam as a Snap

Steam is available as a snap package, and you can install it from the Ubuntu 24.04 App Center or via the command line. To install it via the GUI, use the steps below.

Step 1: Search for Steam on App Center
On your Ubuntu, open the App Center and search for "Steam" in the search box. Different results will appear, and the first one is what we want to install.

Step 2: Install Steam
On the search results page, click on Steam to open a window showing a summary of its information. Locate the green Install button and click on it. You will be prompted to enter your password before the installation can begin. Once you do so, a window showing the progress bar of the installation process will appear. Once the process completes, you will have Steam installed and ready for use on your Ubuntu 24.04.

Alternatively, if you prefer the command-line option, you can install the same snap package using the snap command. Specify the package when running your command as shown below.

$ sudo snap install steam

On the output, the download and installation progress will be shown, and once it completes, Steam will be available from your applications. You can open it and set it up for your gaming.

Method 3: Download and Install the Steam Package

Steam releases a .deb package for Linux, and by downloading it, you can use it to install Steam. Unlike the previous methods, this method requires downloading the Steam package from its website using a command-line utility such as wget or curl.

Step 1: Install wget
To download the Steam .deb package, we will use wget. You can skip this step if you already have it installed. Otherwise, execute the below command.

$ sudo apt install wget

Step 2: Download the Steam Package
With wget installed, run the following command to download the Steam .deb package.

$ wget https://steamcdn-a.akamaihd.net/client/installer/steam.deb

Step 3: Install Steam
To install the .deb package, we will use the dpkg command below.

$ sudo dpkg -i steam.deb

Once Steam completes installing, verify that you can access it by searching for it on your Ubuntu 24.04.
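If the dpkg command above complains about unmet dependencies instead of finishing, that is expected behaviour - dpkg does not resolve dependencies on its own. A common follow-up is to let APT pull in whatever the Steam package still needs and complete the configuration:

$ sudo apt install -f

This is a general dpkg recovery step rather than anything Steam-specific, and after it runs you can verify the installation as described above.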
With that, you now have Steam installed on Ubuntu.

Conclusion

Steam is a handy tool for any gamer, and its cross-platform nature means you can install it on Ubuntu 24.04. We've given three installation methods you can use depending on your preference. Once you've installed Steam, configure it and create your account to start utilizing it. Happy gaming!
  3. Proxmox VE 8 is one of the best open-source and free Type-I hypervisors out there for running QEMU/KVM virtual machines (VMs) and LXC containers. It has a nice web management interface and a lot of features. One of the most amazing features of Proxmox VE is that it can passthrough PCI/PCIE devices (i.e. an NVIDIA GPU) from your computer to Proxmox VE virtual machines (VMs). The PCI/PCIE passthrough is getting better and better with newer Proxmox VE releases. At the time of this writing, the latest version of Proxmox VE is Proxmox VE v8.1 and it has great PCI/PCIE passthrough support. In this article, I am going to show you how to configure your Proxmox VE 8 host/server for PCI/PCIE passthrough and configure your NVIDIA GPU for PCIE passthrough on Proxmox VE 8 virtual machines (VMs).   Table of Contents Enabling Virtualization from the BIOS/UEFI Firmware of Your Motherboard Installing Proxmox VE 8 Enabling Proxmox VE 8 Community Repositories Installing Updates on Proxmox VE 8 Enabling IOMMU from the BIOS/UEFI Firmware of Your Motherboard Enabling IOMMU on Proxmox VE 8 Verifying if IOMMU is Enabled on Proxmox VE 8 Loading VFIO Kernel Modules on Proxmox VE 8 Listing IOMMU Groups on Proxmox VE 8 Checking if Your NVIDIA GPU Can Be Passthrough to a Proxmox VE 8 Virtual Machine (VM) Checking for the Kernel Modules to Blacklist for PCI/PCIE Passthrough on Proxmox VE 8 Blacklisting Required Kernel Modules for PCI/PCIE Passthrough on Proxmox VE 8 Configuring Your NVIDIA GPU to Use the VFIO Kernel Module on Proxmox VE 8 Passthrough the NVIDIA GPU to a Proxmox VE 8 Virtual Machine (VM) Still Having Problems with PCI/PCIE Passthrough on Proxmox VE 8 Virtual Machines (VMs)? Conclusion References   Enabling Virtualization from the BIOS/UEFI Firmware of Your Motherboard Before you can install Proxmox VE 8 on your computer/server, you must enable the hardware virtualization feature of your processor from the BIOS/UEFI firmware of your motherboard. The process is different for different motherboards. So, if you need any assistance in enabling hardware virtualization on your motherboard, read this article.   Installing Proxmox VE 8 Proxmox VE 8 is free to download, install, and use. Before you get started, make sure to install Proxmox VE 8 on your computer. If you need any assistance on that, read this article.   Enabling Proxmox VE 8 Community Repositories Once you have Proxmox VE 8 installed on your computer/server, make sure to enable the Proxmox VE 8 community package repositories. By default, Proxmox VE 8 enterprise package repositories are enabled and you won’t be able to get/install updates and bug fixes from the enterprise repositories unless you have bought Proxmox VE 8 enterprise licenses. So, if you want to use Proxmox VE 8 for free, make sure to enable the Proxmox VE 8 community package repositories to get the latest updates and bug fixes from Proxmox for free.   Installing Updates on Proxmox VE 8 Once you’ve enabled the Proxmox VE 8 community package repositories, make sure to install all the available updates on your Proxmox VE 8 server.   Enabling IOMMU from the BIOS/UEFI Firmware of Your Motherboard The IOMMU configuration is found in different locations in different motherboards. To enable IOMMU on your motherboard, read this article.   Enabling IOMMU on Proxmox VE 8 Once the IOMMU is enabled on the hardware side, you also need to enable IOMMU from the software side (from Proxmox VE 8). 
To enable IOMMU from Proxmox VE 8, you have to add the following kernel boot parameters:

Processor Vendor    Kernel boot parameters to add
Intel               intel_iommu=on iommu=pt
AMD                 iommu=pt

To modify the kernel boot parameters of Proxmox VE 8, open the /etc/default/grub file with the nano text editor as follows:

$ nano /etc/default/grub

At the end of GRUB_CMDLINE_LINUX_DEFAULT, add the required kernel boot parameters for enabling IOMMU depending on the processor you're using. As I am using an AMD processor, I have added only the kernel boot parameter iommu=pt at the end of the GRUB_CMDLINE_LINUX_DEFAULT line in the /etc/default/grub file. Once you're done, press <Ctrl> + X followed by Y and <Enter> to save the /etc/default/grub file.

Now, update the GRUB boot configurations with the following command:

$ update-grub2

Once the GRUB boot configurations are updated, click on Reboot to restart your Proxmox VE 8 server for the changes to take effect.

Verifying if IOMMU is Enabled on Proxmox VE 8

To verify whether IOMMU is enabled on Proxmox VE 8, run the following command:

$ dmesg | grep -e DMAR -e IOMMU

If IOMMU is enabled, you will see some output confirming that IOMMU is enabled. If IOMMU is not enabled, you may not see any output.

You also need to have IOMMU Interrupt Remapping enabled for PCI/PCIE passthrough to work. To check if IOMMU Interrupt Remapping is enabled on your Proxmox VE 8 server, run the following command:

$ dmesg | grep 'remapping'

As you can see, IOMMU Interrupt Remapping is enabled on my Proxmox VE 8 server. NOTE: Most modern AMD and Intel processors will have IOMMU Interrupt Remapping enabled. If for any reason you don't have IOMMU Interrupt Remapping enabled, there's a workaround: you have to enable Unsafe Interrupts for VFIO. Read this article for more information on enabling Unsafe Interrupts on your Proxmox VE 8 server.

Loading VFIO Kernel Modules on Proxmox VE 8

The PCI/PCIE passthrough is done mainly by the VFIO (Virtual Function I/O) kernel modules on Proxmox VE 8. The VFIO kernel modules are not loaded at boot time by default on Proxmox VE 8, but it's easy to load them at boot time. First, open the /etc/modules-load.d/vfio.conf file with the nano text editor as follows:

$ nano /etc/modules-load.d/vfio.conf

Type in the following lines in the /etc/modules-load.d/vfio.conf file.

vfio
vfio_iommu_type1
vfio_pci

Once you're done, press <Ctrl> + X followed by Y and <Enter> to save the changes.

Now, update the initramfs of your Proxmox VE 8 installation with the following command:

$ update-initramfs -u -k all

Once the initramfs is updated, click on Reboot to restart your Proxmox VE 8 server for the changes to take effect.

Once your Proxmox VE 8 server boots, you should see that all the required VFIO kernel modules are loaded.

$ lsmod | grep vfio

Listing IOMMU Groups on Proxmox VE 8

To passthrough PCI/PCIE devices on Proxmox VE 8 virtual machines (VMs), you will need to check the IOMMU groups of your PCI/PCIE devices quite frequently. To make checking for IOMMU groups easier, I decided to write a shell script (I got it from GitHub, but I can't remember the name of the original poster) in the path /usr/local/bin/print-iommu-groups so that I can just run the print-iommu-groups command and it will print the IOMMU groups on the Proxmox VE 8 shell.
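If you just want a quick one-off look before setting up the script, a rough shell one-liner like the following also works (this is my own ad-hoc equivalent, not the script from GitHub); it prints each PCI/PCIE device along with its IOMMU group number:

$ for d in /sys/kernel/iommu_groups/*/devices/*; do echo "IOMMU group $(basename $(dirname $(dirname $d))): $(lspci -nns ${d##*/})"; done

The reusable script below is nicer to work with, so let's set that up anyway.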
First, create a new file print-iommu-groups in the path /usr/local/bin and open it with the nano text editor as follows:

$ nano /usr/local/bin/print-iommu-groups

Type in the following lines in the print-iommu-groups file:

#!/bin/bash
shopt -s nullglob
for g in `find /sys/kernel/iommu_groups/* -maxdepth 0 -type d | sort -V`; do
    echo "IOMMU Group ${g##*/}:"
    for d in $g/devices/*; do
        echo -e "\t$(lspci -nns ${d##*/})"
    done;
done;

Once you're done, press <Ctrl> + X followed by Y and <Enter> to save the changes to the print-iommu-groups file.

Make the print-iommu-groups script file executable with the following command:

$ chmod +x /usr/local/bin/print-iommu-groups

Now, you can run the print-iommu-groups command as follows to print the IOMMU groups of the PCI/PCIE devices installed on your Proxmox VE 8 server:

$ print-iommu-groups

As you can see, the IOMMU groups of the PCI/PCIE devices installed on my Proxmox VE 8 server are printed.

Checking if Your NVIDIA GPU Can Be Passthrough to a Proxmox VE 8 Virtual Machine (VM)

To passthrough a PCI/PCIE device to a Proxmox VE 8 virtual machine (VM), it must be in its own IOMMU group. If 2 or more PCI/PCIE devices share an IOMMU group, you can't passthrough any of the PCI/PCIE devices of that IOMMU group to any Proxmox VE 8 virtual machines (VMs). So, if your NVIDIA GPU and its audio device are in their own IOMMU group, you can passthrough the NVIDIA GPU to any Proxmox VE 8 virtual machine (VM). On my Proxmox VE 8 server, I am using an MSI X570 ACE motherboard paired with a Ryzen 3900X processor and a Gigabyte RTX 4070 NVIDIA GPU. According to the IOMMU groups of my system, I can passthrough the NVIDIA RTX 4070 GPU (IOMMU Group 21), the RTL8125 2.5Gbe Ethernet Controller (IOMMU Group 20), the Intel I211 Gigabit Ethernet Controller (IOMMU Group 19), a USB 3.0 controller (IOMMU Group 24), and the Onboard HD Audio Controller (IOMMU Group 25).

$ print-iommu-groups

As the main focus of this article is configuring Proxmox VE 8 for passing through the NVIDIA GPU to Proxmox VE 8 virtual machines, what matters here is that the NVIDIA GPU and its audio device are in their own IOMMU group.

Checking for the Kernel Modules to Blacklist for PCI/PCIE Passthrough on Proxmox VE 8

To passthrough a PCI/PCIE device on a Proxmox VE 8 virtual machine (VM), you must make sure that Proxmox VE forces it to use the VFIO kernel module instead of its original kernel module. To find out which kernel module your PCI/PCIE devices are using, you will need to know the vendor ID and device ID of these PCI/PCIE devices. You can find the vendor ID and device ID of the PCI/PCIE devices using the print-iommu-groups command.

$ print-iommu-groups

For example, the vendor ID and device ID of my NVIDIA RTX 4070 GPU is 10de:2786 and its audio device is 10de:22bc.

To find the kernel module a PCI/PCIE device 10de:2786 (my NVIDIA RTX 4070 GPU) is using, run the lspci command as follows:

$ lspci -v -d 10de:2786

As you can see, my NVIDIA RTX 4070 GPU is using the nvidiafb and nouveau kernel modules by default. So, it can't be passed to a Proxmox VE 8 virtual machine (VM) at this point.

The audio device of my NVIDIA RTX 4070 GPU is using the snd_hda_intel kernel module. So, it can't be passed to a Proxmox VE 8 virtual machine at this point either.

$ lspci -v -d 10de:22bc

So, to passthrough my NVIDIA RTX 4070 GPU and its audio device to a Proxmox VE 8 virtual machine (VM), I must blacklist the nvidiafb, nouveau, and snd_hda_intel kernel modules and configure my NVIDIA RTX 4070 GPU and its audio device to use the vfio-pci kernel module.

Blacklisting Required Kernel Modules for PCI/PCIE Passthrough on Proxmox VE 8

To blacklist kernel modules on Proxmox VE 8, open the /etc/modprobe.d/blacklist.conf file with the nano text editor as follows:

$ nano /etc/modprobe.d/blacklist.conf

To blacklist the nouveau, nvidiafb, and snd_hda_intel kernel modules (to passthrough an NVIDIA GPU), add the following lines in the /etc/modprobe.d/blacklist.conf file:

blacklist nouveau
blacklist nvidiafb
blacklist snd_hda_intel

Once you're done, press <Ctrl> + X followed by Y and <Enter> to save the /etc/modprobe.d/blacklist.conf file.

Configuring Your NVIDIA GPU to Use the VFIO Kernel Module on Proxmox VE 8

To configure a PCI/PCIE device (i.e. your NVIDIA GPU) to use the VFIO kernel module, you need to know its vendor ID and device ID. In this case, the vendor ID and device ID of my NVIDIA RTX 4070 GPU and its audio device are 10de:2786 and 10de:22bc.

To configure your NVIDIA GPU to use the VFIO kernel module, open the /etc/modprobe.d/vfio.conf file with the nano text editor as follows:

$ nano /etc/modprobe.d/vfio.conf

To configure your NVIDIA GPU and its audio device with the <vendor-id>:<device-id> 10de:2786 and 10de:22bc (let's say) respectively to use the VFIO kernel module, add the following line to the /etc/modprobe.d/vfio.conf file.

options vfio-pci ids=10de:2786,10de:22bc

Once you're done, press <Ctrl> + X followed by Y and <Enter> to save the /etc/modprobe.d/vfio.conf file.

Now, update the initramfs of Proxmox VE 8 with the following command:

$ update-initramfs -u -k all

Once the initramfs is updated, click on Reboot to restart your Proxmox VE 8 server for the changes to take effect.

Once your Proxmox VE 8 server boots, you should see that your NVIDIA GPU and its audio device (10de:2786 and 10de:22bc in my case) are using the vfio-pci kernel module. Now, your NVIDIA GPU is ready to be passed to a Proxmox VE 8 virtual machine.

$ lspci -v -d 10de:2786
$ lspci -v -d 10de:22bc

Passthrough the NVIDIA GPU to a Proxmox VE 8 Virtual Machine (VM)

Now that your NVIDIA GPU is ready for passthrough on Proxmox VE 8 virtual machines (VMs), you can passthrough your NVIDIA GPU to your desired Proxmox VE 8 virtual machine and install the NVIDIA GPU drivers, depending on the operating system that you're using on that virtual machine, as usual.
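As a quick sketch of what the attachment can look like from the Proxmox VE shell (the VM ID 100 and PCI address 01:00 below are placeholders - use your own VM ID and the address reported by print-iommu-groups), the GPU can be added as a raw PCI device either from the web UI (Hardware > Add > PCI Device) or with the qm command:

$ qm set 100 -hostpci0 01:00,pcie=1,x-vga=on

Passing the address without a function number (01:00 rather than 01:00.0) maps all functions of the device, so the GPU and its audio function go to the virtual machine together. Note that pcie=1 requires the virtual machine to use the q35 machine type.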
For detailed information on how to passthrough your NVIDIA GPU on a Proxmox VE 8 virtual machine (VM) with different operating systems installed, read one of the following articles: How to Passthrough an NVIDIA GPU to a Windows 11 Proxmox VE 8 Virtual Machine (VM) How to Passthrough an NVIDIA GPU to a Ubuntu 24.04 LTS Proxmox VE 8 Virtual Machine (VM) How to Passthrough an NVIDIA GPU to a LinuxMint 21 Proxmox VE 8 Virtual Machine (VM) How to Passthrough an NVIDIA GPU to a Debian 12 Proxmox VE 8 Virtual Machine (VM) How to Passthrough an NVIDIA GPU to an Elementary OS 8 Proxmox VE 8 Virtual Machine (VM) How to Passthrough an NVIDIA GPU to a Fedora 39+ Proxmox VE 8 Virtual Machine (VM) How to Passthrough an NVIDIA GPU on an Arch Linux Proxmox VE 8 Virtual Machine (VM) How to Passthrough an NVIDIA GPU on a Red Hat Enterprise Linux 9 (RHEL 9) Proxmox VE 8 Virtual Machine (VM)   Still Having Problems with PCI/PCIE Passthrough on Proxmox VE 8 Virtual Machines (VMs)? Even after trying everything listed in this article correctly, if PCI/PCIE passthrough still does not work for you, be sure to try out some of the Proxmox VE PCI/PCIE passthrough tricks and/or workarounds that you can use to get PCI/PCIE passthrough work on your hardware.   Conclusion In this article, I have shown you how to configure your Proxmox VE 8 server for PCI/PCIE passthrough so that you can passthrough PCI/PCIE devices (i.e. your NVIDIA GPU) to your Proxmox VE 8 virtual machines (VMs). I have also shown you how to find out the kernel modules that you need to blacklist and how to blacklist them for a successful passthrough of your desired PCI/PCIE devices (i.e. your NVIDIA GPU) to a Proxmox VE 8 virtual machine. Finally, I have shown you how to configure your desired PCI/PCIE devices (i.e. your NVIDIA GPU) to use the VFIO kernel modules, which is also an essential step for a successful passthrough of your desired PCI/PCIE devices (i.e. your NVIDIA GPU) to a Proxmox VE 8 virtual machine (VM).   References PCI(e) Passthrough – Proxmox VE PCI Passthrough – Proxmox VE The ultimate gaming virtual machine on proxmox – YouTube
  4. Anyone can easily run multiple operating systems on one host simultaneously, provided they have VirtualBox installed. Even on Ubuntu 24.04, you can install VirtualBox and utilize it to run any supported operating system. The best part about VirtualBox is that it is open-source virtualization software, and you can install and use it anytime. Whether you are stuck on how to install VirtualBox on Ubuntu 24.04 or looking to advance with other operating systems on top of your host, this post gives you two easy methods.

Two Methods of Installing VirtualBox on Ubuntu 24.04

There are different ways of installing VirtualBox on Ubuntu 24.04. For instance, you can retrieve a stable VirtualBox version from Ubuntu's repository or add Oracle's VirtualBox repository to install a specific version. Which method to use will depend on your requirements, and we've discussed the methods in the sections below.

Method 1: Install VirtualBox via APT

The easiest way of installing VirtualBox on Ubuntu 24.04 is by sourcing it from the official Ubuntu repository using APT. Below are the steps you should follow.

Step 1: Update the Repository
In every installation, the first step involves refreshing the source list to update the package index by executing the following command.

$ sudo apt update

Step 2: Install VirtualBox
Once you've updated your package index, the next task is to run the install command below to fetch and install the VirtualBox package.

$ sudo apt install virtualbox

Step 3: Verify the Installation
After the installation, use the following command to check the installed version. The output also confirms that you successfully installed VirtualBox on Ubuntu 24.04.

$ VBoxManage --version

Method 2: Install VirtualBox from Oracle's Repository

The previous method shows that we installed VirtualBox version 7.0.14. However, if you visit the VirtualBox website, depending on when you read this post, it's likely that the version we've installed may not be the latest. Although the older VirtualBox versions are okay, installing the latest version is always the better option as it contains all patches and fixes. However, to install the latest version, you must add Oracle's repository to your Ubuntu before you can execute the install command.

Step 1: Install Prerequisites
All the dependencies you require before you can add the Oracle VirtualBox repository can be installed by installing the software-properties-common package.

$ sudo apt install software-properties-common

Step 2: Add GPG Keys
GPG keys help verify the authenticity of repositories before we add them to the system. The Oracle repository is a third-party repository, and by installing its GPG key, it will be checked for integrity and authenticity. Here's how you add the GPG key.

$ wget -q https://www.virtualbox.org/download/oracle_vbox_2016.asc -O- | sudo apt-key add -

You will receive an output on your terminal showing that the key has been downloaded and installed.

Step 3: Add Oracle's VirtualBox Repository
Oracle has a VirtualBox repository for all supported operating systems. To fetch this repository and add it to your /etc/apt/sources.list.d/, execute the following command.

$ echo "deb [arch=amd64] https://download.virtualbox.org/virtualbox/debian $(lsb_release -cs) contrib" | sudo tee /etc/apt/sources.list.d/virtualbox.list

The output shows that a new repository entry has been created from which we will source VirtualBox when we execute the install command.
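If you want to double-check what was written before moving on (this step is purely optional), the file should contain a single deb line pointing at Oracle's download server, with your release codename filled in by lsb_release - on Ubuntu 24.04 that codename is noble:

$ cat /etc/apt/sources.list.d/virtualbox.list
deb [arch=amd64] https://download.virtualbox.org/virtualbox/debian noble contrib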
Step 4: Install VirtualBox
With the repository added, let's first refresh the package index by updating it.

$ sudo apt update

Next, specify which VirtualBox version you want to install using the below syntax.

$ sudo apt install virtualbox-[version]

For instance, if the latest version when reading this post is 7.1, you would replace [version] in the above command with 7.1. However, ensure that the specified version is available on the VirtualBox website. Otherwise, you will get an error, as you can't install something that can't be found.

Conclusion

VirtualBox is an effective way of running numerous operating systems on one host simultaneously. This post shares two methods of installing VirtualBox on Ubuntu 24.04. First, you can install it via APT by sourcing it from the Ubuntu repository. Alternatively, you can add the Oracle repository and specify the exact version of VirtualBox you want to install.
  5. In recent years, support for PCI/PCIE (i.e. GPU passthrough) has improved a lot in newer hardware. So, the regular Proxmox VE PCI/PCIE and GPU passthrough guide should work in most new hardware. Still, you may face many problems passing through GPUs and other PCI/PCIE devices on a Proxmox VE virtual machine. There are many tweaks/fixes/workarounds for some of the common Proxmox VE GPU and PCI/PCIE passthrough problems. In this article, I am going to discuss some of the most common Proxmox VE PCI/PCIE passthrough and GPU passthrough problems and the steps you can take to solve those problems.   Table of Contents What to do if IOMMU Interrupt Remapping is not Supported? What to do if My GPU (or PCI/PCIE Device) is not in its own IOMMU Group? How do I Blacklist AMD GPU Drivers on Proxmox VE? How do I Blacklist NVIDIA GPU Drivers on Proxmox VE? How do I Blacklist Intel GPU Drivers on Proxmox VE? How to Check if my GPU (or PCI/PCIE Device) is Using the VFIO Driver on Proxmox VE? I Have Blacklisted the AMU GPU Drivers, Still, the GPU is not Using the VFIO Driver, What to Do? I Have Blacklisted the NVIDIA GPU Drivers, Still, the GPU is not Using the VFIO Driver, What to Do? I Have Blacklisted the Intel GPU Drivers, Still, the GPU is not Using the VFIO Driver, What to Do? Single GPU Used VFIO Driver, But When Configured a Second GPU, it Didn’t Work, Why? Why Disable VGA Arbitration for the GPUs and How to Do It? What if my GPU is Still not Using the VFIO Driver Even After Configuring VFIO? GPU Passthrough Showed No Errors, But I’m Getting a Black Screen on the Monitor Connected to the GPU Passed to the Proxmox VE VM, Why? What is AMD Vendor Reset Bug and How to Solve it? How to Provide a vBIOS for the Passed GPU on a Proxmox VE Virtual Machine? What to do If Some Apps Crash the Proxmox VE Windows Virtual Machine? How to Solve HDMI Audio Crackling/Broken Problems on Proxmox VE Linux Virtual Machines?. How to Update Proxmox VE initramfs? How to Update Proxmox VE GRUB Bootloader? Conclusion References   What to do If IOMMU Interrupt Remapping is not Supported? For PCI/PCIE passthrough, IOMMU interrupt remapping is essential. To check whether your processor supports IOMMU interrupt remapping, run the command below: $ dmesg | grep -i remap   If your processor supports IOMMU interrupt remapping, you will see some sort of output confirming that interrupt remapping is enabled. Otherwise, you will see no outputs. If IOMMU interrupt remapping is not supported on your processor, you will have to configure unsafe interrupts on your Proxmox VE server to passthrough PCI/PCIE devices on Proxmox VE virtual machines. To configure unsafe interrupts on Proxmox VE, create a new file iommu_unsafe_interrupts.conf in the /etc/modprobe.d directory and open it with the nano text editor as follows: $ nano /etc/modprobe.d/iommu_unsafe_interrupts.conf   Add the following line in the iommu_unsafe_interrupts.conf file and press <Ctrl> + X followed by Y and <Enter> to save the file. options vfio_iommu_type1 allow_unsafe_interrupts=1   Once you’re done, you must update the initramfs of your Proxmox VE server.   What to do if my GPU (or PCI/PCIE Device) is not in its own IOMMU Group? If your server has multiple PCI/PCIE slots, you can move the GPU to a different PCI/PCIE slot and see if the GPU is in its own IOMMU group. If that does not work, you can try enabling the ACS override kernel patch on Proxmox VE. 
To try enabling the ACS override kernel patch on Proxmox VE, open the /etc/default/grub file with the nano text editor as follows: $ nano /etc/default/grub   Add the kernel boot option pcie_acs_override=downstream at the end of the GRUB_CMDLINE_LINUX_DEFAULT. Once you’re done, press <Ctrl> + X followed by Y and <Enter> to save the file and make sure to update the Proxmox VE GRUB bootloader for the changes to take effect. You should have better IOMMU grouping once your Proxmox VE server boots. If your GPU still does not have its own IOMMU group, you can go one step further by using the pcie_acs_override=downstream,multifunction instead. You should have an even better IOMMU grouping.   If pcie_acs_override=downstream,multifunction results in better IOMMU grouping that pcie_acs_override=downstream, then why use pcie_acs_override=downstream at all? Well, the purpose of PCIE ACS override is to fool the kernel into thinking that the PCIE devices are isolated when they are not in reality. So, PCIE ACS override comes with security and stability issues. That’s why you should try using a less aggressive PCIE ACS override option pcie_acs_override=downstream first and see if your problem is solved. If pcie_acs_override=downstream does not work, only then you should use the more aggressive option pcie_acs_override=downstream,multifunction.   How do I Blacklist AMD GPU Drivers on Proxmox VE? If you want to passthrough an AMD GPU on Proxmox VE virtual machines, you must blacklist the AMD GPU drivers and make sure that it uses the VFIO driver instead. First, open the /etc/modprobe.d/blacklist.conf file with the nano text editor as follows: $ nano /etc/modprobe.d/blacklist.conf   To blacklist the AMD GPU drivers, add the following lines to the /etc/modprobe.d/blacklist.conf file and press <Ctrl> + X followed by Y and <Enter> to save the file. blacklist radeon blacklist amdgpu   Once you’re done, you must update the initramfs of your Proxmox VE server for the changes to take effect.   How do I Blacklist NVIDIA GPU Drivers on Proxmox VE? If you want to passthrough an NVIDIA GPU on Proxmox VE virtual machines, you must blacklist the NVIDIA GPU drivers and make sure that it uses the VFIO driver instead. First, open the /etc/modprobe.d/blacklist.conf file with the nano text editor as follows: $ nano /etc/modprobe.d/blacklist.conf   To blacklist the NVIDIA GPU drivers, add the following lines to the /etc/modprobe.d/blacklist.conf file and press <Ctrl> + X followed by Y and <Enter> to save the file. blacklist nouveau blacklist nvidia blacklist nvidiafb blacklist nvidia_drm   Once you’re done, you must update the initramfs of your Proxmox VE server for the changes to take effect.   How do I Blacklist Intel GPU Drivers on Proxmox VE? If you want to passthrough an Intel GPU on Proxmox VE virtual machines, you must blacklist the Intel GPU drivers and make sure that it uses the VFIO driver instead. First, open the /etc/modprobe.d/blacklist.conf file with the nano text editor as follows: $ nano /etc/modprobe.d/blacklist.conf   To blacklist the Intel GPU drivers, add the following lines to the /etc/modprobe.d/blacklist.conf file and press <Ctrl> + X followed by Y and <Enter> to save the file. blacklist snd_hda_intel blacklist snd_hda_codec_hdmi blacklist i915   Once you’re done, you must update the initramfs of your Proxmox VE server for the changes to take effect.   How to Check if my GPU (or PCI/PCIE Device) is Using the VFIO Driver on Proxmox VE? 
To check if your GPU or desired PCI/PCIE devices are using the VFIO driver, run the following command: $ lspci -v   If your GPU or PCI/PCIE device is using the VFIO driver, you should see the line Kernel driver in use: vfio-pci as marked in the screenshot below.   I Have Blacklisted the AMU GPU Drivers, Still, the GPU is not Using the VFIO Driver, What to Do? At times, blacklisting the AMD GPU drivers is not enough, you also have to configure the AMD GPU drivers to load after the VFIO driver. To do that, open the /etc/modprobe.d/vfio.conf file with the nano text editor as follows: $ nano /etc/modprobe.d/vfio.conf   To configure the AMD GPU drivers to load after the VFIO driver, add the following lines to the /etc/modprobe.d/vfio.conf file and press <Ctrl> + X followed by Y and <Enter> to save the file. softdep radeon pre: vfio-pci softdep amdgpu pre: vfio-pci   Once you’re done, you must update the initramfs of your Proxmox VE server for the changes to take effect.   I Have Blacklisted the NVIDIA GPU Drivers, Still, the GPU is not Using the VFIO Driver, What to Do? At times, blacklisting the NVIDIA GPU drivers is not enough, you also have to configure the NVIDIA GPU drivers to load after the VFIO driver. To do that, open the /etc/modprobe.d/vfio.conf file with the nano text editor as follows: $ nano /etc/modprobe.d/vfio.conf   To configure the NVIDIA GPU drivers to load after the VFIO driver, add the following lines to the /etc/modprobe.d/vfio.conf file and press <Ctrl> + X followed by Y and <Enter> to save the file. softdep nouveau pre: vfio-pci softdep nvidia pre: vfio-pci softdep nvidiafb pre: vfio-pci softdep nvidia_drm pre: vfio-pci softdep drm pre: vfio-pci   Once you’re done, you must update the initramfs of your Proxmox VE server for the changes to take effect.   I Have Blacklisted the Intel GPU Drivers, Still, the GPU is not Using the VFIO Driver, What to Do? At times, blacklisting the Intel GPU drivers is not enough, you also have to configure the Intel GPU drivers to load after the VFIO driver. To do that, open the /etc/modprobe.d/vfio.conf file with the nano text editor as follows: $ nano /etc/modprobe.d/vfio.conf   To configure the Intel GPU drivers to load after the VFIO driver, add the following lines to the /etc/modprobe.d/vfio.conf file and press <Ctrl> + X followed by Y and <Enter> to save the file. softdep snd_hda_intel pre: vfio-pci softdep snd_hda_codec_hdmi pre: vfio-pci softdep i915 pre: vfio-pci   Once you’re done, you must update the initramfs of your Proxmox VE server for the changes to take effect.   Single GPU Used VFIO Driver, But When Configured a Second GPU, it Didn’t Work, Why? In the /etc/modprobe.d/vfio.conf file, you must add the IDs of all the PCI/PCIE devices that you want to use the VFIO driver in a single line. One device per line won’t work. For example, if you have 2 GPUs that you want to configure to use the VFIO driver, you must add their IDs in a single line in the /etc/modprobe.d/vfio.conf file as follows: options vfio-pci ids=<GPU-1>,<GPU-1-Audio>,<GPU-2>,<GPU-2-Audio>   If you want to add another GPU to the list, just append it at the end of the existing vfio-pci line in the /etc/modprobe.d/vfio.conf file as follows: options vfio-pci ids=<GPU-1>,<GPU-1-Audio>,<GPU-2>,<GPU-2-Audio>,<GPU-3>,<GPU-3-Audio>   Never do this. Although it looks much cleaner, it won’t work. I do wish we could specify PCI/PCIE IDs this way. 
options vfio-pci ids=<GPU-1>,<GPU-1-Audio> options vfio-pci ids=<GPU-2>,<GPU-2-Audio> options vfio-pci ids=<GPU-3>,<GPU-3-Audio>   Why Disable VGA Arbitration for the GPUs and How to Do It? If you’re using UEFI/OVMF BIOS on the Proxmox VE virtual machine where you want to passthrough the GPU, you can disable VGA arbitration which will reduce the legacy codes required during boot. To disable VGA arbitration for the GPUs, add disable_vga=1 at the end of the vfio-pci option in the /etc/modprobe.d/vfio.conf file as shown below: options vfio-pci ids=<GPU-1>,<GPU-1-Audio>,<GPU-2>,<GPU-2-Audio> disable_vga=1   What if my GPU is Still not Using the VFIO Driver Even After Configuring VFIO? Even after doing everything correctly, if your GPU still does not use the VFIO driver, you will need to try booting Proxmox VE with kernel options that disable the video framebuffer. On Proxmox VE 7.1 and older, the nofb nomodeset video=vesafb:off video=efifb:off video=simplefb:off kernel options disable the GPU framebuffer for your Proxmox VE server. On Proxmox VE 7.2 and newer, the initcall_blacklist=sysfb_init kernel option does a better job at disabling the GPU framebuffer for your Proxmox VE server. Open the GRUB bootloader configuration file /etc/default/grub file with the nano text editor with the following command: $ nano /etc/default/grub   Add the kernel option initcall_blacklist=sysfb_init at the end of the GRUB_CMDLINE_LINUX_DEFAULT. Once you’re done, press <Ctrl> + X followed by Y and <Enter> to save the file and make sure to update the Proxmox VE GRUB bootloader for the changes to take effect.   GPU Passthrough Showed No Errors, But I’m Getting a Black Screen on the Monitor Connected to the GPU Passed to the Proxmox VE VM, Why? Once you’ve passed a GPU to a Proxmox VE virtual machine, make sure to use the Default Graphics card before you start the virtual machine. This way, you will be able to access the display of the virtual machine from the Proxmox VE web management UI, download the GPU driver installer on the virtual machine, and install it on the virtual machine. Once the GPU driver is installed on the virtual machine, the screen of the virtual machine will be displayed on the monitor connected to the GPU that you’ve passed to the virtual machine as well.   Once the GPU driver is installed on the virtual machine and the screen of the virtual machine is displayed on the monitor connected to the GPU (passed to the virtual machine), power off the virtual machine and set the Display Graphic card of the virtual machine to none. Once you’re set, the next time you power on the virtual machine, the screen of the virtual machine will be displayed on the monitor connected to the GPU (passed to the virtual machine) only, nothing will be displayed on the Proxmox VE web management UI. This way, you will have the same experience as using a real computer even though you’re using a virtual machine.   Remember, never use SPICE, VirtIO GPU, and VirGL GPU Display Graphic card on the Proxmox VE virtual machine that you’re configuring for GPU passthrough as it has a high chance of failure.   What is AMD Vendor Reset Bug and How to Solve it? AMD GPUs have a well-known bug called “vendor reset bug”. Once an AMD GPU is passed to a Proxmox VE virtual machine, and you power off this virtual machine, you won’t be able to use the AMD GPU in another Proxmox VE virtual machine. At times, your Proxmox VE server will become unresponsive as a result. This is called the “vendor reset bug” of AMD GPUs. 
The reason this happens is that AMD GPUs can’t reset themselves correctly after being passed to a virtual machine. To fix this problem, you will have to reset your AMD GPU properly. For more information on installing the AMD vendor reset on Proxmox VE, read this article and read this thread on Proxmox VE forum. Also, check the vendor reset GitHub page.   How to Provide a vBIOS for the Passed GPU on a Proxmox VE Virtual Machine? If you’ve installed the GPU on the first slot of your motherboard, you might not be able to passthrough the GPU in a Proxmox VE virtual machine by default. Some motherboards shadow the vBIOS of the GPU installed on the first slot by default which is the reason the GPU installed on the first slot of those motherboards can’t be passed to virtual machines. The solution to this problem is to install the GPU on the second slot of the motherboard, extract the vBIOS of the GPU, install the GPU on the first slot of the motherboard, and passthrough the GPU to a Proxmox VE virtual machine along with the extracted vBIOS of the GPU. NOTE: To learn how to extract the vBIOS of your GPU, read this article. Once you’ve obtained the vBIOS for your GPU, you must store the vBIOS file in the /usr/share/kvm/ directory of your Proxmox VE server to access it. Once the vBIOS file for your GPU is stored in the /usr/share/kvm/ directory, you need to configure your virtual machine to use it. Currently, there is no way to specify the vBIOS file for PCI/PCIE devices of Proxmox VE virtual machines from the Proxmox VE web management UI. So, you will have to do everything from the Proxmox VE shell/command-line. You can find the Proxmox VE virtual machine configuration files in the /etc/pve/qemu-server/ directory of your Proxmox VE server. Each Proxmox VE virtual machine has one configuration file in this directory in the format <VM-ID>.conf. For example, to open the Proxmox VE virtual machine configuration file (for editing) for the virtual machine ID 100, you will need to run the following command: $ nano /etc/pve/qemu-server/100.conf   In the virtual machine configuration file, you will need to append romfile=<vBIOS-filename> in the hostpciX line which is responsible for passing the GPU on the virtual machine. For example, if the vBIOS filename for my GPU is gigabyte-nvidia-1050ti.bin, and I have passed the GPU on the first slot (slot 0) of the virtual machine (hostpci0), then in the 100.conf file, the line should be as follows: hostpci0: <PCI-ID-of-GPU>,x-vga=on,romfile=gigabyte-nvidia-1050ti.bin   Once you’re done, save the virtual machine configuration file by pressing <Ctrl> + X followed by Y and <Enter>, start the virtual machine, and check if the GPU passthrough is working.   What to do if Some Apps Crash the Proxmox VE Windows Virtual Machine? Some apps such as GeForce Experience, Passmark, etc. might crash Proxmox VE Windows virtual machines. You might also experience a sudden blue screen of death (BSOD) on your Proxmox VE Windows virtual machines. The reason it happens is that the Windows virtual machine might try to access the model-specific registers (MSRs) that are not actually available and depending on how your hardware handles MSRs requests, your system might crash. The solution to this problem is ignoring MSRs messages on your Proxmox VE server. 
To configure MSRs on your Proxmox VE server, open the /etc/modprobe.d/kvm.conf file with the nano text editor as follows:

$ nano /etc/modprobe.d/kvm.conf

To ignore MSRs on your Proxmox VE server, add the following line to the /etc/modprobe.d/kvm.conf file.

options kvm ignore_msrs=1

Once MSRs are ignored, you might see a lot of MSR warning messages in your dmesg system log. To avoid that, you can ignore MSRs as well as disable logging of the MSR warnings by adding the following line instead:

options kvm ignore_msrs=1 report_ignored_msrs=0

Once you're done, press <Ctrl> + X followed by Y and <Enter> to save the /etc/modprobe.d/kvm.conf file and update the initramfs of your Proxmox VE server for the changes to take effect.

How to Solve HDMI Audio Crackling/Broken Problems on Proxmox VE Linux Virtual Machines?

If you've passed the GPU to a Linux Proxmox VE virtual machine and you're getting bad audio quality on the virtual machine, you will need to enable MSI (Message Signaled Interrupts) for the audio device on the Proxmox VE virtual machine. To enable MSI on the Linux Proxmox VE virtual machine, open the /etc/modprobe.d/snd-hda-intel.conf file with the nano text editor on the virtual machine with the following command:

$ sudo nano /etc/modprobe.d/snd-hda-intel.conf

Add the following line and save the file by pressing <Ctrl> + X followed by Y and <Enter>.

options snd-hda-intel enable_msi=1

For the changes to take effect, reboot the Linux virtual machine with the following command:

$ sudo reboot

Once the virtual machine boots, check if MSI is enabled for the audio device with the following command:

$ sudo lspci -vv

If MSI is enabled for the audio device on the virtual machine, you should see the marked line in the audio device information.

How to Update Proxmox VE initramfs?

Every time you make any changes to files in the /etc/modules-load.d/ and /etc/modprobe.d/ directories, you must update the initramfs of your Proxmox VE installation with the following command:

$ update-initramfs -u -k all

Once the Proxmox VE initramfs is updated, reboot your Proxmox VE server for the changes to take effect.

$ reboot

How to Update Proxmox VE GRUB Bootloader?

Every time you update the Proxmox VE GRUB boot configuration file /etc/default/grub, you must update the GRUB bootloader for the changes to take effect. To update the Proxmox VE GRUB bootloader with the new configuration, run the following command:

$ update-grub2

Once the GRUB bootloader is updated with the new configuration, reboot your Proxmox VE server for the changes to take effect.

$ reboot

Conclusion

In this article, I have discussed some of the most common Proxmox VE PCI/PCIE passthrough and GPU passthrough problems and the steps you can take to solve them.

References

[TUTORIAL] – PCI/GPU Passthrough on Proxmox VE 8 : Installation and configuration | Proxmox Support Forum
Ultimate Beginner's Guide to Proxmox GPU Passthrough
Reading and Writing Model Specific Registers in Linux
The MSI Driver Guide HOWTO — The Linux Kernel documentation
  6. Proxmox VE (Virtualization Environment) is an open-source enterprise virtualization and containerization platform. It has a built-in user-friendly web interface for managing virtual machines and LXC containers. It has other features such as Ceph software-defined storage (SDS), software-defined networking (SDN), high availability (HA) clustering, and many more. After the recent Broadcom acquisition of VMware, the cost of VMware products has risen to the point that many small to medium-sized companies are/will be forced to switch to alternate products. Even the free VMware ESXi is discontinued which is bad news for homelab users. Proxmox VE is one of the best alternatives to VMware vSphere and it has the same set of features as VMware vSphere (with a few exceptions of course). Proxmox VE is open-source and free, which is great for home labs as well as businesses. Proxmox VE also has an optional enterprise subscription option that you can purchase if needed. In this article, I will show you how to install Proxmox VE 8 on your server. I will cover Graphical UI-based installation methods of Proxmox VE and Terminal UI-based installation for systems having problems with the Graphical UI-based installer.   Table of Contents Booting Proxmox VE 8 from a USB Thumb Drive Installing Proxmox VE 8 using Graphical UI Installing Proxmox VE 8 using Terminal UI Accessing Proxmox VE 8 Management UI from a Web Browser Enabling Proxmox VE Community Package Repositories Keeping Proxmox VE Up-to-date Conclusion References   Booting Proxmox VE 8 from a USB Thumb Drive First, you need to download the Proxmox VE 8 ISO image and create a bootable USB thumb drive of Proxmox VE 8. If you need any assistance on that, read this article. Once you’ve created a bootable USB thumb drive of Proxmox VE 8, power off your server, insert the bootable USB thumb drive on your server, and boot the Proxmox VE 8 installer from it. Depending on the motherboard manufacturer, you need to press a certain key after pressing the power button to boot from the USB thumb drive. If you need any assistance on booting your server from a USB thumb drive, read this article. Once you’ve successfully booted from the USB thumb drive, the Proxmox VE GRUB menu should be displayed.   Installing Proxmox VE 8 using Graphical UI To install Proxmox VE 8 using a graphical user interface, select Install Proxmox VE (Graphical) from the Proxmox VE GRUB menu and press <Enter>.   The Proxmox VE installer should be displayed. Click on I agree.   Now, you have to configure the disk for the Proxmox VE installation. You can configure the disk for Proxmox VE installation in different ways: If you have a single 500GB/1TB (or larger capacity) SSD/HDD on your server, you can use it for Proxmox VE installation as well as storing virtual machine images, container images, snapshots, backups, ISO images, and so on. That’s not very safe, but you can try out Proxmox this way without needing a lot of hardware resources. You can use a small 64GB or 128GB SSD for Proxmox VE installation only. Once Proxmox VE is installed, you can create additional storage pools for storing virtual machine images, container images, snapshots, backups, ISO images, and so on. You can create a big ZFS or BTRFS RAID for Proxmox VE installation which will also be used for storing virtual machine images, container images, snapshots, backups, ISO images, and so on.   
a) To install Proxmox VE on a single SSD/HDD and also use the SSD/HDD for storing virtual machine and container images, ISO images, virtual machine and container snapshots, virtual machine and container backups, etc., select the SSD/HDD from the Target Harddisk dropdown menu[1] and click on Next[2]. Proxmox VE will use a small portion of the free disk space for the Proxmox VE root filesystem and the rest of the disk space will be used for storing virtual machine and container data.   If you want to change the filesystem of your Proxmox VE installation or configure the size of different Proxmox VE partitions/storages, select the HDD/SSD you want to use for your Proxmox VE installation from the Target Harddisk dropdown menu and click on Options.   An advanced disk configuration window should be displayed. From the Filesystem dropdown menu, select your desired filesystem. ext4 and xfs filesystems are supported for single-disk Proxmox VE installations at the time of this writing[1]. The other storage configuration parameters are: hdsize[2]: By default, Proxmox VE will use all the disk space of the selected HDD/SSD. To keep some disk space free on the selected HDD/SSD, type in the amount of disk space (in GB) that you want Proxmox VE to use; the rest of the disk space will be left free. swapsize[3]: By default, Proxmox VE will use 4GB to 8GB of disk space for swap depending on the amount of memory/RAM installed on the server. To set a custom swap size for Proxmox VE, type in your desired swap size (in GB) here. maxroot[4]: Defines the maximum disk space to use for the Proxmox VE LVM root volume/filesystem. minfree[5]: Defines the minimum disk space that must remain free in the Proxmox VE LVM volume group (VG). This space will be used for LVM snapshots. maxvz[6]: Defines the maximum disk space to use for the Proxmox VE LVM data volume where virtual machine and container data/images will be stored. Once you're done with the disk configuration, click on OK[7].   To install Proxmox VE on the disk with your desired storage configuration, click on Next.   b) To install Proxmox VE on a small SSD and create the necessary storage for virtual machine and container data later, select the SSD from the Target Harddisk dropdown menu[1] and click on Options[2].   Set maxvz to 0 to disable virtual machine and container storage on the SSD where Proxmox VE will be installed and click on OK.   Once you're done, click on Next.   c) To create a ZFS or BTRFS RAID and install Proxmox VE on the RAID, click on Options.   You can pick different ZFS and BTRFS RAID types from the Filesystem dropdown menu. Each of these RAID types works differently and requires a different number of disks. For more information on how different RAID types work, their requirements, features, data safety, etc., read this article.   RAID0, RAID1, and RAID10 are discussed thoroughly in this article. RAIDZ-1 and RAIDZ-2 work in the same way as RAID5 and RAID6 respectively. RAID5 and RAID6 are also discussed in this article. RAIDZ-1 requires at least 2 disks (3 disks recommended), uses single parity, and can sustain only 1 disk failure. RAIDZ-2 requires at least 3 disks (4 disks recommended), uses double parity, and can sustain 2 disk failures. RAIDZ-3 requires at least 4 disks (5 disks recommended), uses triple parity, and can sustain 3 disk failures.   Although you can create BTRFS RAIDs on Proxmox VE, at the time of this writing, BTRFS support on Proxmox VE is still a technology preview. So, I don't recommend using it on production systems.
I will demonstrate the ZFS RAID configuration on Proxmox VE in this article.   To create a ZFS RAID for the Proxmox VE installation, select your desired ZFS RAID type from the Filesystem dropdown menu[1]. From the Disk Setup tab, select the disks that you want to use for the ZFS RAID using the Harddisk X dropdown menus[2]. If you don't want to use a disk for the ZFS RAID, select – do not use – from the respective Harddisk X dropdown menu[3].   From the Advanced Options tab, you can configure different ZFS filesystem parameters. ashift[1]: You can set the ZFS block size using this option. The block size is calculated using the formula 2^ashift. The default ashift value is 12, which means 2^12 = 4096 bytes = 4 KB block size. A 4 KB block size is good for SSDs. If you're using a mechanical hard drive (HDD) with 512-byte sectors, set ashift to 9 (2^9 = 512 bytes) to match the drive's block size. compress[2]: You can enable/disable ZFS compression from this dropdown menu. To enable compression, set compress to on. To disable compression, set compress to off. When compression is on, the default ZFS compression algorithm (lz4 at the time of this writing) is used. You can select other ZFS compression algorithms (i.e. lzjb, zle, gzip, zstd) as well if you have such preferences. checksum[3]: ZFS checksums are used to detect corrupted files so that they can be repaired. You can enable/disable ZFS checksums from this dropdown menu. To enable ZFS checksums, set checksum to on. To disable them, set checksum to off. When checksum is on, the fletcher4 algorithm is used by default for non-deduped (deduplication disabled) datasets and the sha256 algorithm is used by default for deduped (deduplication enabled) datasets. copies[4]: You can set the number of redundant copies of the data you want to keep in your ZFS RAID. This is in addition to the RAID-level redundancy and provides extra data protection. The default number of copies is 1, and you can store at most 3 copies of data in your ZFS RAID. This feature is also known as ditto blocks. ARC max size[5]: You can set the maximum amount of memory ZFS is allowed to use for the Adaptive Replacement Cache (ARC) from here. hdsize[6]: By default, all the free disk space is used for the ZFS RAID. If you want to keep a portion of each disk free and use the rest for the ZFS RAID, type in the disk space you want to use (in GB) here. For example, if you have 40GB disks and you want to use 35GB of each disk for the ZFS RAID and keep 5GB free on each disk, type in 35 here. Once you're done with the ZFS RAID configuration, click on OK[7].   Once you're done with the ZFS storage configuration, click on Next to continue.   Type in the name of your country[1], select your time zone[2], select your keyboard layout[3], and click on Next[4].   Type in your Proxmox VE root password[1] and your email[2]. Once you're done, click on Next[3].   If you have multiple network interfaces available on your server, select the one you want to use for accessing the Proxmox VE web management UI from the Management Interface dropdown menu[1]. If you have only a single network interface available on your server, it will be selected automatically. Type in the domain name that you want to use for Proxmox VE in the Hostname (FQDN) section[2]. Type in your desired IP information for the Proxmox VE server[3] and click on Next[4].   An overview of your Proxmox VE installation should be displayed. If everything looks good, click on Install to start the Proxmox VE installation.
NOTE: If anything seems wrong or you want to change certain information, you can always click on Previous to go back and fix it. So, make sure to check everything before clicking on Install.   The Proxmox VE installation should start. It will take a while to complete.   Once the Proxmox VE installation is complete, you will see the following window. Your server should restart within a few seconds.   On the next boot, you will see the Proxmox VE GRUB boot menu.   Once Proxmox VE is booted, you will see the Proxmox VE command-line login prompt. You will also see the access URL of the Proxmox VE web-based management UI.   Installing Proxmox VE 8 using Terminal UI In some hardware, the Proxmox VE graphical installer may not work. In that case, you can always use the Proxmox VE terminal installer. You will find the same options in the Proxmox VE terminal installer as in the graphical installer. So, you should not have any problems installing Proxmox VE on your server using the terminal installer. To use the Proxmox VE terminal installer, select Install Proxmox VE (Terminal UI) from the Proxmox VE GRUB boot menu and press <Enter>.   Select <I agree> and press <Enter>.   To install Proxmox VE on a single disk, select an HDD/SSD from the Target harddisk section, select <Next>, and press <Enter>.   For advanced disk configuration or ZFS/BTRFS RAID setup, select <Advanced options> and press <Enter>.   You will find the same disk configuration options as in the Proxmox VE graphical installer. I have already discussed all of them in the Proxmox VE Graphical UI installation section. Make sure to check it out for detailed information on all of those disk configuration options. Once you’ve configured the disk/disks for the Proxmox VE installation, select <Ok> and press <Enter>.   Once you’re done with advanced disk configuration for your Proxmox VE installation, select <Next> and press <Enter>.   Select your country, timezone, and keyboard layout. Once you’re done, select <Next> and press <Enter>.   Type in your Proxmox VE root password and email address. Once you’re done, select <Next> and press <Enter>.   Configure the management network interface for Proxmox VE, select <Next>, and press <Enter>.   An overview of your Proxmox VE installation should be displayed. If everything looks good, select <Install> and press <Enter> to start the Proxmox VE installation. NOTE: If anything seems wrong or you want to change certain information, you can always select <Previous> and press <Enter> to go back and fix it. So, make sure to check everything before installing Proxmox VE.   The Proxmox VE installation should start. It will take a while to complete.   Once the Proxmox VE installation is complete, you will see the following window. Your server should restart within a few seconds.   Once Proxmox VE is booted, you will see the Proxmox VE command-line login prompt. You will also see the access URL of the Proxmox VE web-based management UI.   Accessing Proxmox VE 8 Management UI from a Web Browser To access the Proxmox VE web-based management UI from a web browser, you need a modern web browser (i.e. Google Chrome, Microsoft Edge, Mozilla Firefox, Opera, Apple Safari). Open a web browser of your choice and visit the Proxmox VE access URL (i.e. https://192.168.0.105:8006) from the web browser. By default, Proxmox VE uses a self-signed SSL certificate which your web browser will not trust. So, you will see a similar warning. To accept the Proxmox VE self-signed SSL certificate, click on Advanced.   
Then, click on Accept the Risk and Continue.   You will see the Proxmox VE login prompt. Type in your Proxmox VE login username (root) and password[1] and click on Login[2].   You should be logged in to your Proxmox VE web management UI. As you're using the free version of Proxmox VE, you will see a No valid subscription warning message every time you log in to Proxmox VE. To ignore this warning and continue using Proxmox VE for free, just click on OK.   The No valid subscription warning should be gone. Proxmox VE is now ready to use.   Enabling Proxmox VE Community Package Repositories If you want to use Proxmox VE for free, one of the first things you will want to do after installing Proxmox VE on your server is disable the Proxmox VE enterprise package repositories and enable the Proxmox VE community package repositories. This way, you can access the Proxmox VE package repositories for free and keep your Proxmox VE server up-to-date. To learn how to enable the Proxmox VE community package repositories, read this article. A rough sketch of what this involves is also included after the references below.   Keeping Proxmox VE Up-to-date After installing Proxmox VE on your server, you should check if new updates are available for your Proxmox VE server. If new updates are available, you should install them, as they will improve the performance, stability, and security of your Proxmox VE server. For more information on keeping your Proxmox VE server up-to-date, read this article.   Conclusion In this article, I have shown you how to install Proxmox VE on your server using the Graphical installer UI and the Terminal installer UI. The Terminal installer UI is for systems that don't support the Graphical installer UI. So, if you're having difficulty with the Graphical installer UI, the Terminal installer UI will still work and save your day. I have also discussed and demonstrated different disk/storage configuration methods for Proxmox VE, including configuring a ZFS RAID and installing Proxmox VE on it.   References RAIDZ Types Reference ZFS/Virtual disks – ArchWiki ZFS Tuning Recommendations | High Availability The copies Property Checksums and Their Use in ZFS — OpenZFS documentation ZFS ARC Parameters – Oracle Solaris Tunable Parameters Reference Manual
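As promised above, here is a rough sketch of what enabling the community repositories involves; the linked article remains the authoritative guide. The file names and the bookworm suite below assume a default Proxmox VE 8 installation on Debian 12, and the commands are run as root from the Proxmox VE shell: $ sed -i 's/^deb/#deb/' /etc/apt/sources.list.d/pve-enterprise.list $ echo "deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription" > /etc/apt/sources.list.d/pve-no-subscription.list $ apt update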
  7. Most operating systems distribute their installer programs in ISO image format. So, the most common way of installing an operating system on a Proxmox VE virtual machine is using an ISO image of that operating system. You can obtain the ISO image file of your favorite operating systems from their official websites. To install your favorite operating system on a Proxmox VE virtual machine, the ISO image of that operating system must be available in a proper storage location on your Proxmox VE server. Any Proxmox VE storage that supports ISO image files has an ISO Images section with options for uploading and downloading ISO images.   In this article, I will show you how to upload an ISO image to your Proxmox VE server from your computer. I will also show you how to download an ISO image directly on your Proxmox VE server using the download link or URL of that ISO image.   Table of Contents Uploading an ISO Image on Proxmox VE Server from Your Computer Downloading an ISO Image on Proxmox VE Server using URL Conclusion   Uploading an ISO Image on Proxmox VE Server from Your Computer To upload an ISO image on your Proxmox VE server from your computer, navigate to the ISO Images section of an ISO image-supported storage from the Proxmox VE web management UI and click on Upload.   Click on Select File from the Upload window.   Select the ISO image file that you want to upload on your Proxmox VE server from the filesystem of your computer[1] and click on Open[2].   Once the ISO image file is selected, its name will be displayed in the File name section. If you want, you can modify the name under which the ISO image file will be stored on your Proxmox VE server once it's uploaded[1]. The size of the ISO image file will be displayed in the File size section[2]. Once you're ready to upload the ISO image to your Proxmox VE server, click on Upload[3].   The ISO image file is being uploaded to the Proxmox VE server. It will take a few seconds to complete. If for some reason you want to stop the upload process, click on Abort.   Once the ISO image file is uploaded to your Proxmox VE server, you will see the following window. Just close it.   Shortly, the ISO image that you've uploaded to your Proxmox VE server should be listed in the ISO Images section of the selected Proxmox VE storage.   Downloading an ISO Image on Proxmox VE Server using URL To download an ISO image on your Proxmox VE server using a URL or download link, visit the official website of the operating system that you want to download and copy the download link or URL of the ISO image from the website. For example, to download the ISO image of Debian 12, visit the official website of Debian from a web browser[1], right-click on Download, and click on Copy Link[2].   Then, navigate to the ISO Images section of an ISO image-supported storage from the Proxmox VE web management UI and click on Download from URL.   Paste the download link or URL of the ISO image in the URL section and click on Query URL.   Proxmox VE should check the ISO file URL and obtain the necessary information like the File name[1] and File size[2] of the ISO image file. If you want to save the ISO image file under a different name on your Proxmox VE server, just type it in the File name section[1]. Once you're ready, click on Download[3].   Proxmox VE should start downloading the ISO image file from the URL. It will take a while to complete.   Once the ISO image file is downloaded on your Proxmox VE server, you will see the following window. Just close it.
The downloaded ISO image file should be listed in the ISO Images section of the selected Proxmox VE storage.   Conclusion In this article, I have shown you how to upload an ISO image from your computer to the Proxmox VE server. I have also shown you how to download an ISO image directly on your Proxmox VE server using a URL.
  8. Keeping your Proxmox VE server up-to-date is important as newer updates come with bug fixes and improved security. If you’re using the Proxmox VE community version (the free version of Proxmox VE without an enterprise subscription), installing new updates will also add new features to your Proxmox VE server as they are released. In this article, I am going to show you how to check if new updates are available on your Proxmox VE server. If updates are available, I will also show you how to install the available updates on your Proxmox VE server.   Table of Contents Enabling the Proxmox VE Community Package Repositories Checking for Available Updates on Proxmox VE Installing Available Updates on Proxmox VE Conclusion   Enabling the Proxmox VE Community Package Repositories If you don’t have an enterprise subscription on your Proxmox VE server, you need to disable the Proxmox VE enterprise package repositories and enable the Proxmox VE community package repositories to receive software updates on your Proxmox VE server. If you want to keep using Proxmox VE for free, make sure to enable the Proxmox VE community package repositories.   Checking for Available Updates on Proxmox VE To check if new updates are available on your Proxmox VE server, log in to your Proxmox VE web-management UI, navigate to the Updates section of your Proxmox VE server, and click on Refresh.   If you’re using the Proxmox VE community version (free version), you will see a No valid subscription warning. Click on OK to ignore the warning.   The Proxmox VE package database should be updated. Close the Task viewer window.   If newer updates are not available, then you will see the No updates available message after the Proxmox VE package database is updated.   If newer updates are available for your Proxmox VE server, you will see a list of packages that can be updated as shown in the screenshot below.   Installing Available Updates on Proxmox VE To install all the available updates on your Proxmox VE server, click on Upgrade.   A new NoVNC window should be displayed. Press Y and then press <Enter> to confirm the installation.   The Proxmox VE updates are being downloaded. It will take a while to complete.   The Proxmox VE updates are being installed. It will take a while to complete.   At this point, the Proxmox VE updates should be installed. Close the NoVNC window.   If you check for Proxmox VE updates, you should see the No updates available message. Your Proxmox VE server should be up-to-date[1]. After the updates are installed, it’s best to reboot your Proxmox VE server. To reboot your Proxmox VE server, click on Reboot[2].   Conclusion In this article, I have shown you how to check if new updates are available for your Proxmox VE server. If new updates are available, I have also shown you how to install the available updates on your Proxmox VE server. You should always keep your Proxmox VE server up-to-date so that you get the latest bug fixes and security updates.
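As a side note for command-line users: since Proxmox VE is Debian-based, the same refresh-and-upgrade cycle described above can also be performed from the Proxmox VE shell with plain APT. This is a hedged equivalent of the Refresh and Upgrade buttons in the web UI, run as root: $ apt update $ apt dist-upgrade $ reboot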
  9. The full form of SR-IOV is Single Root I/O Virtualization. Some PCI/PCIE devices provide multiple virtual functions, and each of these virtual functions can be passed to a different virtual machine. SR-IOV is the technology that allows this type of PCI/PCIE passthrough. For example, each port of an SR-IOV capable 8-port network card can expose one or more virtual functions, and these virtual functions can be passed to different virtual machines (VMs). In this article, we will show you how to enable the SR-IOV CPU feature from the BIOS/UEFI firmware of some of the most popular desktop motherboards (i.e. ASUS, ASRock, MSI, and Gigabyte).   Table of Contents How to Enable SR-IOV from the BIOS/UEFI Firmware of ASUS Motherboards How to Enable SR-IOV from the BIOS/UEFI Firmware of ASRock Motherboards How to Enable SR-IOV from the BIOS/UEFI Firmware of MSI Motherboards How to Enable SR-IOV from the BIOS/UEFI Firmware of Gigabyte Motherboards Conclusion References   How to Enable SR-IOV from the BIOS/UEFI Firmware of ASUS Motherboards To enter the BIOS/UEFI Firmware of your ASUS motherboard, press <Delete> right after pressing the power button of your computer. The BIOS/UEFI Firmware of ASUS motherboards has two modes: "EZ Mode" and "Advanced Mode". Once you've entered the BIOS/UEFI Firmware of your ASUS motherboard, you will be in "EZ Mode" by default. To enable SR-IOV on your ASUS motherboard, you have to enter the "Advanced Mode". To enter "Advanced Mode", press <F7> while you're in "EZ Mode". For both AMD and Intel systems, navigate to the "Advanced" tab (by pressing the arrow keys), navigate to "PCI Subsystem Settings", and set "SR-IOV Support" to "Enabled". To save the changes, press <F10>, select OK, and press <Enter>. The SR-IOV feature should be enabled. For more information on enabling the SR-IOV feature from the BIOS/UEFI Firmware of your ASUS motherboard, check the BIOS Manual of your ASUS motherboard.   How to Enable SR-IOV from the BIOS/UEFI Firmware of ASRock Motherboards To enter the BIOS/UEFI Firmware of your ASRock motherboard, press <F2> or <Delete> right after pressing the power button of your computer. If you're using a high-end ASRock motherboard, you may find yourself in "Easy Mode" once you enter the BIOS/UEFI Firmware of your ASRock motherboard. In that case, press <F6> to switch to "Advanced Mode". If you're using a cheap/mid-range ASRock motherboard, you may not have an "Easy Mode". You will be taken to "Advanced Mode" directly. In that case, you won't have to press <F6> to switch to "Advanced Mode". You will be in the "Main" tab by default. Press the <Right> arrow key to navigate to the "Advanced" tab of the BIOS/UEFI Firmware of your ASRock motherboard. If you have an AMD processor, navigate to "PCI Configuration" and set "SR-IOV Support" to "Enabled". If you have an Intel processor, navigate to "Chipset Configuration" and set "SR-IOV Support" to "Enabled". To save the changes, press <F10>, select Yes, and press <Enter>. The SR-IOV feature should be enabled. For more information on enabling the SR-IOV feature from the BIOS/UEFI Firmware of your ASRock motherboard, check the BIOS Manual of your ASRock motherboard.   How to Enable SR-IOV from the BIOS/UEFI Firmware of MSI Motherboards To enter the BIOS/UEFI Firmware of your MSI motherboard, press <Delete> right after pressing the power button of your computer. The BIOS/UEFI Firmware of MSI motherboards has two modes: "EZ Mode" and "Advanced Mode".
Once you’ve entered the BIOS/UEFI Firmware of your MSI motherboard, you will be in “EZ Mode” by default. To enable the SR-IOV on your MSI motherboard, you have to enter the “Advanced Mode”. To enter the “Advanced Mode”, press <F7> while you’re in “EZ Mode”. From the “Advanced Mode”, navigate to “Settings”. If you’re using an AMD processor, navigate to “Advanced” > “PCI Subsystem Settings” and set “SR-IOV Support” to “Enabled”. If you’re using an Intel processor, navigate to “Advanced” > “PCIe/PCI Sub-system Settings” and set “SR-IOV Support” to “Enabled”. NOTE: You may not find the “SR-IOV Support” option in the BIOS/UEFI firmware of your MSI motherboard. In that case, you can try updating the BIOS/UEFI firmware version and see if the option is available. To save the changes, press <F10>, select Yes, and press <Enter>. The SR-IOV feature should be enabled. For more information on enabling the SR-IOV feature from the BIOS/UEFI Firmware of your MSI motherboard, check the BIOS Manual of your MSI motherboard.   How to Enable SR-IOV from the BIOS/UEFI Firmware of Gigabyte Motherboards To enter the BIOS/UEFI Firmware of your Gigabyte motherboard, press <Delete> right after pressing the power button of your computer. The BIOS/UEFI Firmware of Gigabyte motherboards has two modes: “Easy Mode” and “Advanced Mode”. To enable SR-IOV, you have to switch to “Advanced Mode”. If you’re in “Easy Mode”, press <F2> to switch to “Advanced Mode”. If you have an AMD processor, navigate to the “Settings” tab, navigate to “IO Ports”, and set “SR-IOV Support” to “Enabled”. If you have an Intel processor, navigate to the “Advanced” tab, navigate to “PCI Subsystem Settings”, and set “SR-IOV Support” to “Enabled”. NOTE: On newer Gigabyte motherboards (i.e. Z590, Z690, Z790), the SR-IOV option might be missing. In that case, try enabling Intel virtualization technology VT-x/VT-d and see if the SR-IOV option is displayed on the BIOS/UEFI firmware of your Gigabyte motherboard. To save the changes, press <F10>, select Yes, and press <Enter>. SR-IOV should be enabled for your processor. For more information on enabling SR-IOV on your Gigabyte motherboard, we recommend you to read the “User Manual” or “BIOS Setup Manual” of your Gigabyte motherboard.   Conclusion We showed you how to enable the SR-IOV CPU feature from the BIOS/UEFI firmware of some of the most popular desktop motherboards (i.e. ASUS, ASRock, MSI, and Gigabyte).   References ASUS ROG Maximus Z690 Hero BIOS Overview ASUS ROG STRIX X570-E Gaming WIFI II BIOS Walk Thru ASRock Bios Optimization! [AMD 7800X3D | X670E Taichi Carrara | XMP PC 5600 CL28 G.Skill | 4090HOF] ASRock Z690 Taichi BIOS Overview SR-IOV on MSI X470 Gaming Pro | MSI Global English Forum Bios Settings 7950x3D 7800x3D [Gigabyte Aorus Elite Ax x670] ASUS PRIME Z490-A BIOS Overview  
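As a follow-up to the firmware steps above, you may want to confirm from a running Linux host (for example, a Proxmox VE server) that virtual functions can actually be created. The following is only a rough sketch, run as root: the network interface name enp1s0f0 and the number of virtual functions are assumptions you will need to adapt to your own hardware, and not every SR-IOV capable device exposes the sriov_numvfs interface in exactly this way. $ lspci -vvv | grep -i "SR-IOV" $ cat /sys/class/net/enp1s0f0/device/sriov_totalvfs $ echo 4 > /sys/class/net/enp1s0f0/device/sriov_numvfs $ lspci | grep -i "Virtual Function"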
  10. The full form of IOMMU is Input-Output Memory Management Unit. An IOMMU maps device-visible virtual addresses to physical addresses, which makes it possible to pass a device through to a virtual machine (VM). VT-d is Intel's implementation of this technology, while AMD's implementation (AMD-Vi) is usually labeled simply IOMMU in the BIOS/UEFI firmware; both serve the same purpose. In this article, we will show you how to enable the IOMMU/VT-d CPU feature from the BIOS/UEFI firmware of some of the most popular desktop motherboards (i.e. ASUS, ASRock, MSI, and Gigabyte).   Table of Contents How to Enable IOMMU/VT-d from the BIOS/UEFI Firmware of ASUS Motherboards How to Enable IOMMU/VT-d from the BIOS/UEFI Firmware of ASRock Motherboards How to Enable IOMMU/VT-d from the BIOS/UEFI Firmware of MSI Motherboards How to Enable IOMMU/VT-d from the BIOS/UEFI Firmware of Gigabyte Motherboards Conclusion References   How to Enable IOMMU/VT-d from the BIOS/UEFI Firmware of ASUS Motherboards To enter the BIOS/UEFI Firmware of your ASUS motherboard, press <Delete> right after pressing the power button of your computer. The BIOS/UEFI Firmware of ASUS motherboards has two modes: "EZ Mode" and "Advanced Mode". Once you've entered the BIOS/UEFI Firmware of your ASUS motherboard, you will be in "EZ Mode" by default. To enable IOMMU/VT-d on your ASUS motherboard, you have to enter the "Advanced Mode". To enter "Advanced Mode", press <F7> while you're in "EZ Mode". If you have an AMD processor, navigate to the "Advanced" tab (by pressing the arrow keys), navigate to "AMD CBS", and set "IOMMU" to "Enabled" from the BIOS/UEFI Firmware of your ASUS motherboard. If you have an Intel processor, navigate to the "Advanced" tab (by pressing the arrow keys), navigate to "System Agent (SA) Configuration", set "VT-d" to "Enabled", and set "Control Iommu Pre-boot Behavior" to "Enable IOMMU during boot" from the BIOS/UEFI Firmware of your ASUS motherboard. To save the changes, press <F10>, select OK, and press <Enter>. The IOMMU/VT-d feature should be enabled. For more information on enabling the IOMMU/VT-d feature from the BIOS/UEFI Firmware of your ASUS motherboard, check the BIOS Manual of your ASUS motherboard.   How to Enable IOMMU/VT-d from the BIOS/UEFI Firmware of ASRock Motherboards To enter the BIOS/UEFI Firmware of your ASRock motherboard, press <F2> or <Delete> right after pressing the power button of your computer. If you're using a high-end ASRock motherboard, you may find yourself in "Easy Mode" once you enter the BIOS/UEFI Firmware of your ASRock motherboard. In that case, press <F6> to switch to "Advanced Mode". If you're using a cheap/mid-range ASRock motherboard, you may not have an "Easy Mode". You will be taken to "Advanced Mode" directly. In that case, you won't have to press <F6> to switch to "Advanced Mode". You will be in the "Main" tab by default. Press the <Right> arrow key to navigate to the "Advanced" tab of the BIOS/UEFI Firmware of your ASRock motherboard. If you have an AMD processor, navigate to "AMD CBS" > "NBIO Common Options" and set "IOMMU" to "Enabled" from the BIOS/UEFI Firmware of your ASRock motherboard. If you have an Intel processor, navigate to "Chipset Configuration" and set "VT-d" to "Enabled" from the BIOS/UEFI Firmware of your ASRock motherboard. To save the changes, press <F10>, select Yes, and press <Enter>. The IOMMU/VT-d feature should be enabled. For more information on enabling the IOMMU/VT-d feature from the BIOS/UEFI Firmware of your ASRock motherboard, check the BIOS Manual of your ASRock motherboard.
How to Enable IOMMU/VT-d from the BIOS/UEFI Firmware of MSI Motherboards To enter the BIOS/UEFI Firmware of your MSI motherboard, press <Delete> right after pressing the power button of your computer. The BIOS/UEFI Firmware of MSI motherboards has two modes: “EZ Mode” and “Advanced Mode”. Once you’ve entered the BIOS/UEFI Firmware of your MSI motherboard, you will be in “EZ Mode” by default. To enable the IOMMU/VT-d on your MSI motherboard, you have to enter the “Advanced Mode”. To enter the “Advanced Mode”, press <F7> while you’re in “EZ Mode”. Navigate to “OC settings”, scroll down to “CPU Features”, and press <Enter>. If you have an AMD processor, set “IOMMU” to “Enabled”. If you have an Intel processor, set “Intel VT-D Tech” to “Enabled”. To save the changes, press <F10>, select Yes, and press <Enter>. The IOMMU/VT-d feature should be enabled. For more information on enabling the IOMMU/VT-d feature from the BIOS/UEFI Firmware of your MSI motherboard, check the BIOS Manual of your MSI motherboard.   How to Enable IOMMU/VT-d from the BIOS/UEFI Firmware of Gigabyte Motherboards To enter the BIOS/UEFI Firmware of your Gigabyte motherboard, press <Delete> right after pressing the power button of your computer. The BIOS/UEFI Firmware of Gigabyte motherboards has two modes: “Easy Mode” and “Advanced Mode”. To enable IOMMU/VT-d, you have to switch to the “Advanced Mode” of the BIOS/UEFI Firmware of your Gigabyte motherboard. If you’re in “Easy Mode”, you can press <F2> to switch to “Advanced Mode” on the BIOS/UEFI Firmware of your Gigabyte motherboard. Use the arrow keys to navigate to the “Settings” tab. If you have an AMD processor, navigate to “Miscellaneous” and set “IOMMU” to “Enabled” from the BIOS/UEFI Firmware of your Gigabyte motherboard. If you have an Intel processor, navigate to “Miscellaneous” and set “VT-d” to “Enabled” from the BIOS/UEFI Firmware of your Gigabyte motherboard. To save the changes, press <F10>, select Yes, and press <Enter>. IOMMU/VT-d should be enabled for your processor. For more information on enabling IOMMU/VT-d on your Gigabyte motherboard, we recommend you to read the “User Manual” or “BIOS Setup Manual” of your Gigabyte motherboard.   Conclusion We showed you how to enable the IOMMU/VT-d CPU feature from the BIOS/UEFI firmware of some of the most popular desktop motherboards (i.e. ASUS, ASRock, MSI, and Gigabyte).   References ROG STRIX Z690 series BIOS Manual ( English Edition ) ASUS ROG Maximus Z690 Hero BIOS Overview ROG STRIX X670E Series BIOS Manual ( English Edition ) ASRock Bios Optimization! [AMD 7800X3D | X670E Taichi Carrara | XMP PC 5600 CL28 G.Skill | 4090HOF] ASRock Z690 Taichi BIOS Overview MSI MEG Z690 ACE BIOS Overview Pomoc techniczna cz. 1 – Ustawianie optymalne biosu i OC w płycie głównej MSI B450 Gaming Plus Max Bios Settings 7950x3D 7800x3D [Gigabyte Aorus Elite Ax x670] GIGABYTE Z690 Aorus Elite DDR4 Motherboard BIOS Overview  
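As a follow-up to the firmware steps above, you can verify from a running Linux system (for example, a Proxmox VE host) that the kernel actually sees and uses the IOMMU. These are standard, vendor-neutral checks, run as root; if the second command prints a list of devices, IOMMU groups are present and remapping is active: $ dmesg | grep -e DMAR -e IOMMU $ find /sys/kernel/iommu_groups/ -type l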
  11. Proxmox VE 8 is the latest version of the Proxmox Virtual Environment. Proxmox VE is an open-source enterprise Type-1 virtualization and containerization platform. In this article, I am going to show you how to download the ISO image of Proxmox VE 8 and create a bootable USB thumb drive of Proxmox VE 8 on Windows 10/11 and Linux so that you can use it to install Proxmox VE 8 on your server and run virtual machines (VMs) and LXC containers.   Table of Contents Downloading the Proxmox VE 8 ISO Image Creating a Bootable USB Thumb Drive of Proxmox VE 8 on Windows 10/11 Creating a Bootable USB Thumb Drive of Proxmox VE 8 on Linux Conclusion   Downloading the Proxmox VE 8 ISO Image To download the ISO image of Proxmox VE 8, visit the official downloads page of Proxmox VE from your favorite web browser. Once the page loads, click on Download from the Proxmox VE ISO Installer section.   Your browser should start downloading the Proxmox VE 8 ISO image. It will take a while to complete.   At this point, the Proxmox VE 8 ISO image should be downloaded.   Creating a Bootable USB Thumb Drive of Proxmox VE 8 on Windows 10/11 On Windows 10/11, you can use Rufus to create bootable USB thumb drives of different operating systems. To download Rufus, visit the official website of Rufus from your favorite web browser. Once the page loads, click on the Rufus download link as marked in the screenshot below.   Rufus should be downloaded.   Insert a USB thumb drive into your computer and double-click on the Rufus app file in the Downloads folder of your Windows 10/11 system to start Rufus.   Click on Yes.   Click on No.   Rufus should start. First, select your USB thumb drive from the Device dropdown menu[1]. Then, click on SELECT to select the Proxmox VE 8 ISO image[2].   Select the Proxmox VE 8 ISO image from the Downloads folder of your Windows 10/11 system using the file picker[1] and click on Open[2].   Click on OK.   Click on START.   Click on OK.   Click on OK. NOTE: The contents of the USB thumb drive will be removed. So, make sure to move important files elsewhere before you click on OK.   The Proxmox VE 8 ISO image is being written to the USB thumb drive. It will take a while to complete.   Once the Proxmox VE ISO image is written to the USB thumb drive, click on CLOSE. Your USB thumb drive should be ready for installing Proxmox VE 8 on your server.   Creating a Bootable USB Thumb Drive of Proxmox VE 8 on Linux On Linux, you can use the dd tool to create a bootable USB thumb drive of different operating systems from an ISO image. First, insert a USB thumb drive into your computer and run the following command to find the device name of your USB thumb drive. $ sudo lsblk -e7   In my case, the device name of my 32GB USB thumb drive is sda, as you can see in the screenshot below.   Navigate to the Downloads directory of your Linux system and you should find the Proxmox VE 8 ISO image there. $ cd ~/Downloads $ ls -lh   To write the Proxmox VE 8 ISO image proxmox-ve_8.1-2.iso to the USB thumb drive sda, run the following command: $ sudo dd if=proxmox-ve_8.1-2.iso of=/dev/sda bs=1M status=progress conv=noerror,sync NOTE: The contents of the USB thumb drive will be erased. So, make sure to move important files elsewhere before you run the command above.   The Proxmox VE 8 ISO image is being written to the USB thumb drive sda. It will take a while to complete.   At this point, the Proxmox VE 8 ISO image should be written to the USB thumb drive.
To safely remove the USB thumb drive from your computer, run the following command: $ sudo eject /dev/sda   Your USB thumb drive should be ready for installing Proxmox VE 8 on any server.   Conclusion In this article, I have shown you how to download the ISO image of Proxmox VE 8. I have also shown you how to download Rufus and use it to create a bootable USB thumb drive of Proxmox VE 8 on Windows 10/11. I have shown you how to create a bootable USB thumb drive of Proxmox VE 8 on Linux using the dd command as well.
  12. In a lab environment, lots of new users will be using JupyterHub. The default Authenticator of JupyterHub allows only the Linux system users to log in to JupyterHub. So, if you want to create a new JupyterHub user, you have to create a new Linux user. Creating new Linux users manually might be a lot of hassle. Instead, you can configure JupyterHub to use FirstUseAuthenticator. FirstUseAuthenticator, as the name suggests, automatically creates a new user the first time that user logs in to JupyterHub. After the user is created, the same username and password can be used to log in to JupyterHub. In this article, I am going to show you how to install the JupyterHub FirstUseAuthenticator on the JupyterHub Python virtual environment. I am also going to show you how to configure JupyterHub to use the FirstUseAuthenticator. NOTE: If you don't have JupyterHub installed on your computer, you can read one of the following articles depending on the Linux distribution you're using: How to Install the Latest Version of JupyterHub on Ubuntu 22.04 LTS/Debian 12/Linux Mint 21 How to Install the Latest Version of JupyterHub on Fedora 38+/RHEL 9/Rocky Linux 9   Table of Contents: Creating a Group for JupyterHub Users Installing JupyterHub FirstUseAuthenticator on the JupyterHub Virtual Environment Configuring JupyterHub FirstUseAuthenticator Restarting the JupyterHub Service Verifying if JupyterHub FirstUseAuthenticator is Working Creating New JupyterHub Users using JupyterHub FirstUseAuthenticator Conclusion References   Creating a Group for JupyterHub Users: I want to keep all the new JupyterHub users in a Linux group jupyterhub-users for easier management. You can create a new Linux group jupyterhub-users with the following command: $ sudo groupadd jupyterhub-users   Installing JupyterHub FirstUseAuthenticator on the JupyterHub Virtual Environment: If you've followed my JupyterHub Installation Guide to install JupyterHub on your favorite Linux distribution (Debian-based or RPM-based), you can install the JupyterHub FirstUseAuthenticator on the JupyterHub Python virtual environment with the following command: $ sudo /opt/jupyterhub/bin/python3 -m pip install jupyterhub-firstuseauthenticator   The JupyterHub FirstUseAuthenticator should be installed on the JupyterHub virtual environment.   Configuring JupyterHub FirstUseAuthenticator: To configure the JupyterHub FirstUseAuthenticator, open the JupyterHub configuration file jupyterhub_config.py with the nano text editor as follows: $ sudo nano /opt/jupyterhub/etc/jupyterhub/jupyterhub_config.py   Type in the following lines in the jupyterhub_config.py configuration file. Note that the group passed to useradd must match the group created earlier (jupyterhub-users). # Configure FirstUseAuthenticator for JupyterHub from jupyterhub.auth import LocalAuthenticator from firstuseauthenticator import FirstUseAuthenticator LocalAuthenticator.create_system_users = True LocalAuthenticator.add_user_cmd = ['useradd', '--create-home', '--gid', 'jupyterhub-users', '--shell', '/bin/bash'] FirstUseAuthenticator.dbm_path = '/opt/jupyterhub/etc/jupyterhub/passwords.dbm' FirstUseAuthenticator.create_users = True class LocalNativeAuthenticator(FirstUseAuthenticator, LocalAuthenticator): pass c.JupyterHub.authenticator_class = LocalNativeAuthenticator   Once you're done, press <Ctrl> + X followed by Y and <Enter> to save the jupyterhub_config.py file.
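Before restarting JupyterHub, you can optionally confirm that the authenticator package is importable from the JupyterHub virtual environment and that the Linux group exists. Both checks only reuse the paths and names set up above: $ sudo /opt/jupyterhub/bin/python3 -c "import firstuseauthenticator; print(firstuseauthenticator.__file__)" $ getent group jupyterhub-users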
Restarting the JupyterHub Service: For the changes to take effect, restart the JupyterHub systemd service with the following command: $ sudo systemctl restart jupyterhub.service   If the JupyterHub configuration file has no errors, the JupyterHub systemd service should run just fine.   Verifying if JupyterHub FirstUseAuthenticator is Working: To verify whether the JupyterHub FirstUseAuthenticator is working, visit JupyterHub from your favorite web browser and try to log in as a random user with a short and easy password like 123, abc, etc. You should see the marked error message that the password is too short and the password should be at least 7 characters long. It means that the JupyterHub FirstUseAuthenticator is working just fine.   Creating New JupyterHub Users using JupyterHub FirstUseAuthenticator: To create a new JupyterHub user using the FirstUseAuthenticator, visit the JupyterHub login page from a web browser, type in your desired login username and the password that you want to set for the new user, and click on Sign in.   A new JupyterHub user should be created and your desired password should be set for the new user. Once the new user is created, the newly created user should be logged into his/her JupyterHub account.   The next time you try to log in as the same user with a different password, you will see the error Invalid username or password. So, once a user is created using the FirstUseAuthenticator, only that user can log in with the same username and password combination. No one else can replace this user account.   Conclusion: In this article, I have shown you how to install the JupyterHub FirstUseAuthenticator on the JupyterHub Python virtual environment. I have also shown you how to configure JupyterHub to use the FirstUseAuthenticator.   References: firstuseauthenticator/firstuseauthenticator/firstuseauthenticator.py at main · jupyterhub/firstuseauthenticator · GitHub  
  13. MySQL is a reliable and widely used DBMS that utilizes SQL and a relational model to manage data. MySQL is often installed as part of a LAMP stack on Linux, but you can install it separately. Even on Ubuntu 24.04, installing MySQL is straightforward. This guide outlines the steps to follow. Read on! Step-By-Step Guide to Install MySQL on Ubuntu 24.04 If you have a user account on your Ubuntu 24.04 system with sudo privileges, installing MySQL requires you to follow the procedure below. Step 1: Update the System's Repository When installing packages on Ubuntu, you should update the system's repository to refresh the sources list. Doing so ensures the MySQL package you install is the latest stable version. $ sudo apt update Step 2: Install MySQL Server Once the package index updates, the next step is to install the MySQL server package using the below command. $ sudo apt install mysql-server After the installation, start the MySQL service on your Ubuntu 24.04. $ sudo systemctl start mysql.service Step 3: Configure MySQL Before we can start working with MySQL, we need to make a couple of configurations. First, access the MySQL shell using the command below. $ sudo mysql Once the shell opens up, set a password for your root user using the below syntax. ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY 'your_password'; We've also specified to use the mysql_native_password authentication method. Exit the MySQL shell. exit; Step 4: Run the MySQL Script One interesting feature of MySQL is that it offers a script that you should run to quickly set it up. The script prompts you to specify different settings based on your preference. For example, you may be prompted to set up password validation and to remove anonymous users. Go through each prompt and respond accordingly. $ sudo mysql_secure_installation Step 5: Modify the Authentication Method After successfully running the MySQL installation script, you should change the authentication method and set it to use the auth_socket plugin. Start by accessing your MySQL shell using the root account. $ mysql -u root -p Once logged in, run the below command to modify the authentication plugin. ALTER USER 'root'@'localhost' IDENTIFIED WITH auth_socket; Step 6: Create a MySQL User So far, we only have access to MySQL using the root account. We should create a new user and specify what privileges they should have. When creating a new user, you must add their username and the login password using the syntax below. CREATE USER 'username'@'localhost' IDENTIFIED BY 'password'; Now that the user is created, we need to specify what privileges the user has when using MySQL. For instance, you can give them privileges, such as CREATE, ALTER, etc., on a specific database or on all databases. Here's an example where we've granted a few privileges to the new user on all available databases. Feel free to specify whichever privileges are ideal for your user. GRANT CREATE, ALTER, INSERT, UPDATE, SELECT ON *.* TO 'username'@'localhost' WITH GRANT OPTION; For the new user and the privileges to apply, flush the privileges and exit MySQL. FLUSH PRIVILEGES; exit; Step 7: Confirm the Created User As the last step, we should verify that our user can access the database and has the specified privileges. Start by checking the MySQL service to ensure it is running. $ sudo systemctl status mysql Next, access MySQL using the credentials of the user you added in the previous step. $ mysql -u username -p A successful login confirms that you've successfully installed MySQL, configured it, and added a new user.
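To double-check the privileges from the command line, you can ask MySQL to print the grants for the account you just logged in with. This is a small optional check that uses the same placeholder username as above; the output should list the GRANT statement issued in Step 6: $ mysql -u username -p -e "SHOW GRANTS;"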
Conclusion MySQL is a relational DBMS widely used for various purposes. It supports SQL in managing data, and this post discusses all the steps you should follow to install it on Ubuntu 24.04. Hopefully, you’ve installed MySQL on your Ubuntu 24.04 with the help of the covered steps.
  14. Task Manager is an app on the Windows 10/11 operating system that is used to monitor the running apps and services of your Windows 10/11 operating system. The Task Manager app is also used for monitoring CPU, memory, disk, network, GPU, and other hardware usage information. A few screenshots of the Windows Task Manager app are shown below: In this article, I am going to show you 6 different ways of opening the Task Manager app on Windows 10/11.   Table of Contents: Opening the Task Manager App from the Start Menu Opening the Task Manager App from the Windows Taskbar Opening the Task Manager App from the Run Window Opening the Task Manager App from the Command Prompt/Terminal Opening the Task Manager App from the Windows Logon Menu Opening the Task Manager App Using the Keyboard Shortcut   1. Opening the Task Manager App from the Start Menu Search for Task Manager in the Start Menu and click on the Task Manager app in the search results as marked in the screenshot below. The Task Manager app should be opened.   2. Opening the Task Manager App from the Windows Taskbar Right-click (RMB) on an empty location of the Windows taskbar and click on Task Manager. The Task Manager app should be opened.   3. Opening the Task Manager App from the Run Window To open the Run window, press <Windows> + <R>. In the Run window, type in taskmgr in the Open section[1] and click on OK[2]. The Task Manager app should be opened.   4. Opening the Task Manager App from the Command Prompt/Terminal To open the Terminal app, right-click (RMB) on the Start Menu and click on Terminal.   The Terminal app should be opened. Type in the command taskmgr and press <Enter>. The Task Manager app should be opened.   5. Opening the Task Manager App from the Windows Logon Menu To open the Windows logon menu, press <Ctrl> + <Alt> + <Delete>. From the Windows logon menu, click on Task Manager. The Task Manager app should be opened.   6. Opening the Task Manager App Using the Keyboard Shortcut The Windows 10/11 Task Manager app can be opened with the keyboard shortcut <Ctrl> + <Shift> + <Escape>.   Conclusion: In this article, I have shown you how to open the Task Manager app on Windows 10/11 in 6 different ways. Feel free to use the method you like best.
  15. Anaconda is a popular package and environment manager for the Python and R programming languages. With Anaconda, you get tons of the packages necessary for data science, machine learning, and other computational tasks. To utilize Anaconda on Ubuntu 24.04, install the conda utility. This post shares the steps for installing conda for Python 3, and we will install version 2024.2-1. Read on! How to Install conda on Ubuntu 24.04 Anaconda is an open-source platform, and by installing conda you get access to it and can use it for any scientific computing task, such as machine learning. The beauty of Anaconda lies in its numerous scientific packages, ensuring you can freely use it for your project needs. Installing conda on Ubuntu 24.04 follows a series of steps, which we've discussed in detail. Step 1: Downloading the Anaconda Installer When installing Anaconda, you should check for and use the latest version of the installer script. You can access all the latest Anaconda3 installer scripts from the Anaconda Downloads Page. As of writing this post, version 2024.2-1 is the latest, and we can go ahead and download it using curl. $ curl https://repo.anaconda.com/archive/Anaconda3-2024.2-1-Linux-x86_64.sh --output anaconda.sh Ensure you change the version in the above command if a newer release is available. Also, navigate to where you want the installer script to be saved before running it. In the above command, we've chosen to save the installer as anaconda.sh, but you can use any preferred name. The installer script is large and will take some time to download, depending on your network's performance. Once the download is complete, verify the file is available using the ls command. Another crucial step is to check the integrity of the installer script. To do so, compute its SHA-256 checksum by running the below command. $ sha256sum anaconda.sh Once you get the output, confirm that it matches the corresponding Anaconda3 hash published on the website. Once everything checks out, you can proceed with the installation. Step 2: Run the conda Installer Script Anaconda has an installer script that will take you through installing it. To run the bash script, execute the below command. $ bash anaconda.sh The script will trigger different prompts that will walk you through the installation. For instance, you must press the Enter key to confirm that you are okay with the installation. Next, a document containing the lengthy Anaconda license agreement will open. Please go through it, and once you reach the bottom, type yes to confirm that you agree with the license terms. You must also specify where you want Anaconda to be installed. By default, the script selects a location in your home directory, which is okay in most cases. However, if you prefer a different location, specify it and press the Enter key again to proceed with the process. Conda will start installing, and the process will take a few minutes. In the end, you will be prompted to initialize Anaconda3. If you wish to initialize it later, choose 'no.' Otherwise, type 'yes,' as in our case. That's it! You will get an output thanking you for installing Anaconda3. This message confirms that the conda utility was installed successfully on Ubuntu 24.04, and you now have the green light to start using it. Step 3: Activate the Installation and Test Anaconda3 Start by sourcing the ~/.bashrc file with the below command. $ source ~/.bashrc Next, restart your shell to open up in the Anaconda3 base environment.
You can now check the installed conda version. $ conda --version Better yet, you can view all the available packages by listing them using the command below. $ conda list With that, you’ve installed Conda on Ubuntu 24.04. You can start working on your projects and maximize the power of Anaconda3 courtesy of its multiple packages. Conclusion Anaconda is installed by installing the conda command-line utility. To install conda, you must download its installer script, execute it, go through the installation prompts, and agree to the license terms. Once you complete the process, you can use Anaconda3 for your projects and leverage all the packages it offers.
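As a quick postscript to the steps above: once conda is on your PATH (i.e. after the initialization step or after sourcing ~/.bashrc), you can create and activate a throwaway environment to confirm everything works end to end. The environment name sandbox and the Python version below are arbitrary choices, not requirements: $ conda create -n sandbox python=3.11 $ conda activate sandbox $ python --version $ conda deactivate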
  16. Now that you have Ubuntu 24.04 installed, the remaining task is ensuring that you install all the software you need, including Java. Installing Java on Ubuntu 24.04 makes it possible to develop and run Java applications, and as a Java programmer, you will inevitably install Java on Ubuntu. Java isn't pre-installed on Ubuntu. As such, you must know what steps are required to quickly install Java before you start using it for your projects. Reading this post will arm you with a simple procedure to install Java on Ubuntu 24.04. Java JDK vs JRE When installing Java on Ubuntu 24.04, a common concern is understanding the difference between the JDK and the JRE and knowing which to install. Here's the thing: the Java Development Kit (JDK) comprises all the tools required to develop Java applications. It includes the Java compiler and debugger, and anyone looking to create Java apps must have the JDK installed. As for the Java Runtime Environment (JRE), it is required for anyone looking to run Java applications on their system. So, if you only want to run Java applications without building them, you only need to install the JRE and not the JDK. As a programmer, you will likely both develop and run Java applications. Therefore, you should install the JDK and the JRE for everything to work correctly. How to Install Java on Ubuntu 24.04 Installing Java only requires access to an internet connection. Again, when you install the JDK, it should pull in a default JRE. However, that's not always the case. Besides, if you want a specific version, you can specify it when running the install command. Here, we've provided the steps to follow to install Java quickly. Take a look! Step 1: Update Ubuntu's Repository Updating the system repository ensures that the package you install is the latest stable version. The update command refreshes the sources list, and when you install Java, you will have the updated source index for the latest version. $ sudo apt update Step 2: Install the Default JRE Before installing Java, first verify that it isn't already installed on your Ubuntu 24.04 by checking its version with the following command. $ java --version If Java is installed, you will get its version displayed in the output. Otherwise, you will get an output showing that 'java' was not found. In that case, install the default JRE using the below command. $ sudo apt install default-jre The installation time will depend on your network's speed. Step 3: Install OpenJDK After successfully installing the JRE, you are ready to install OpenJDK. Here, you can choose to install the default JDK, which will install the version available in the repositories. Alternatively, you can opt to install a specific JDK version depending on your project requirements. For instance, if we want to install OpenJDK 21, we would execute the command as follows. $ sudo apt install openjdk-21-jdk During the installation process, you will get prompted to confirm a few things. Press 'y' and hit the Enter key to proceed with the installation. Once the installation is complete, you will have Java installed on your Ubuntu 24.04 and ready for use. The last task is to verify that Java is installed. By checking the version, you will get an output showing which version is installed. If you want a different version, ensure you specify it in the previous commands, as your project requirements could be different. $ java --version In our case, the output shows that we've installed Java 21.0.3. Conclusion Installing Java on Ubuntu 24.04 isn't a complicated process.
However, you must know what your project requirements are to guide which version you install. To recap, installing Java requires you to first update the repository. Next, install JRE and then specify what OpenJDK version to install. You will have managed to install Java on Ubuntu 24.04, and this post shares more details on each step.
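As a small postscript: a quick way to confirm that you installed the full JDK (and not just a JRE) is to check for the Java compiler, and Ubuntu's alternatives system can switch the default java binary if several versions end up installed side by side. These are standard Ubuntu tools rather than anything specific to this guide: $ javac --version $ sudo update-alternatives --config java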
  17. The Node Package Manager (NPM) is a tool that allows developers to install and work with different JavaScript packages efficiently. Installing NPM involves installing Node.js, and this post shares all the insights you need to install NPM. Node.js is a suitable option for anyone looking to have a scalable backend that utilizes JavaScript. Node.js is built on Chrome's V8 JS engine, and you can easily install it on your Ubuntu 24.04 to start powering the backend functionality in your projects. We will focus on two options for installing NPM on Ubuntu 24.04. Method 1: Install NPM on Ubuntu 24.04 via APT You can find NPM and Node.js in the Ubuntu repository. If you don't need any specific Node.js version for your project, you can utilize this option to install NPM and Node.js with the below commands. First, run the update command. $ sudo apt update Next, source Node.js from the default repository and install it using the command below. $ sudo apt install nodejs At this point, you have Node.js installed, and you can verify the installed version using the command below. $ node -v To install NPM, run the following command. $ sudo apt install npm Verify that NPM is installed by checking its version. $ npm --version We have npm v9 in our case. You can now comfortably start working on your Node.js project, and with NPM installed, you have room to install any dependencies or packages. That's the first method of installing NPM and Node.js on Ubuntu 24.04. Method 2: Install NPM Using NodeSource PPA When you install the NodeSource package, it will install NPM without you needing to install it separately. This method allows you to install a specific Node.js version, provided you've correctly added the PPA by downloading its setup script using wget or curl. Start by visiting the Node.js project to see which version you want to install. Once you decide on the version, retrieve the setup script using curl as in the following command. For our example, we've retrieved version 20.x. $ curl -sL https://deb.nodesource.com/setup_20.x -o nodesource_setup.sh The script will get saved in your current directory, and you can verify it using the ls command. The next step is to run the script, but before that, you can open it with a text editor to confirm that it is safe to execute. You can then run the script using bash with the following command. $ sudo bash nodesource_setup.sh The command will add the NodeSource PPA to your local package sources, from which you can install Node.js. When the script completes executing, you will get an output confirming the PPA has been added, and it will display the command you should use to install Node.js. Note that if you have already installed Node.js using the previous method, it's best to uninstall it before proceeding to avoid running into an error. To do so, use the below command. $ sudo apt autoremove nodejs npm To install the Node.js package, which will also install NPM, run the following command. $ sudo apt install nodejs Your system will source the package from the NodeSource PPA we just added. It will then proceed to install the NodeSource package version that you downloaded. Once the installation is complete, check its version using the following command. $ node -v The output will display the Node.js version you downloaded, which is v20.12.2 in our case. Still, if we check the installed NPM version, you will notice it's different from what we had earlier.
$ npm --version

The PPA installed NPM v10.5.0, which is higher than what we installed in method one earlier.

Conclusion

For anyone looking to use NPM and Node.js to power their backend application, this post shares two methods for seamlessly installing NPM, so you can run Node.js and install packages effectively. You can install NPM from the default Ubuntu 24.04 repository, or add the NodeSource PPA, which installs NPM alongside Node.js.
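As a quick, hypothetical follow-up (the project name demo-app and the express dependency are just examples), you can confirm the toolchain works by initializing a project and adding a package:

$ mkdir demo-app && cd demo-app
$ npm init -y                 # create a default package.json
$ npm install express         # install an example dependency
$ npm ls express              # confirm the package was added to the project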
18. As the name suggests, grep, or global regular expression print, lets you search for specific text patterns within a file's contents. Its functionalities include pattern matching, case-sensitivity control, searching multiple files, recursive search, and more. So whether you're a beginner or a system administrator, knowing the grep command to locate files efficiently is worthwhile. This tutorial will explain how to use grep in Linux and discuss its different applications.

How To Use the Grep Command in Linux

The basic function of the grep command is to search for a particular piece of text inside a file. You can do that by entering the following command:

grep "text_to_search" file.txt

Replace 'text_to_search' with the text you want to search for and 'file.txt' with the target file. For example, to find the string "Hello" in the file named file.txt, we will use:

grep "Hello" file.txt

On entering the above command, grep scans file.txt for "Hello" and outputs the whole line or lines containing the target text.

If the target file is on a path different from your current directory, mention that path along with the file name. For instance:

grep "Hello" ~/Documents/file.txt

Here, the tilde '~' represents your home directory.

The above example shows how to search for a piece of text in a single file. However, if you want to run the same search on multiple files at once, list them one after another in a single grep command:

grep "Hello" file.txt Linux_info.txt Password.txt

If you're not sure about the string's case (uppercase or lowercase), perform a case-insensitive search using the -i option:

grep -i "hello" Intro.txt

Although the string we entered was not an exact match, the case-insensitive search still returned accurate results.

If you want to invert the match and list the lines that don't contain the specific pattern, use the -v option:

grep -v "Hello" file.txt Linux_info.txt Password.txt

Moreover, if you want to display the lines that start with a certain word, use the '^' symbol. It serves as an anchor that marks the beginning of the line:

grep "^Hello" file.txt

The above commands are only useful when you know which file to search. If you don't, you can recursively search for the string inside a whole directory using the -r option. For example, let's search for "Hello" inside the Documents directory:

grep -r "Hello" ~/Documents

Furthermore, you can count the number of lines in which the input string appears in a file with the -c option:

grep -c "Hello" Intro.txt

Similarly, you can display the line numbers along with the matched lines using the -n option:

grep -n "Hello" Intro.txt

A Quick Wrap-up

Users often remember that a file used to contain a piece of text but forget the file name, which can land them in deep trouble. Hence, this tutorial covered using the grep command to search for text within files. Furthermore, we used different examples to demonstrate how you can tweak the grep command's behavior with a few options. You can experiment by combining multiple options to find what suits your use case best.
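As a small combined sketch (the directory and search string are just examples), several of these options can be stacked in one call:

grep -rin "hello" ~/Documents       # recursive, case-insensitive, with line numbers
grep -ril "hello" ~/Documents       # same search, but only list the names of matching files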
19. Linux works well as a multiuser operating system. Many users can access a single OS simultaneously without interfering with each other. However, if others can access your directories or files, the risk increases. Hence, from a security perspective, securing your data from others is essential. Linux has features to control access through permissions and ownership. The ownership of files, folders, or directories is categorized into three parts:

User (u): The default owner, also called the file's creator.
Group (g): A collection of users who share the same permissions to access folders or files.
Other (o): Any user who does not fall into the above two categories.

That's why Linux offers simple ways to change file permissions without hassle. So in this quick blog, we have included all the practical methods to change file permissions in Linux.

How to Change File Permissions in Linux

Linux file permissions are divided into three types:

Read (r): Users can only open and read the file and can't make any changes to it.
Write (w): Users can edit, delete, and modify the file's content with write permission.
Execute (x): Users with this permission can run an executable script and access the file's details.

The owner, operator, and permission symbols used by chmod are as follows:

Owner representation: User → u, Group → g, Other → o
Operators: add with '+', remove with '-', set with '='
Permission symbols (symbolic mode): Read → r, Write → w, Execute → x
Permission values (absolute mode): Read → 4, Write → 2, Execute → 1

As the summary above shows, permissions can be represented in two ways. You can use both modes (symbolic and absolute) to change file permissions with the chmod command. The name chmod refers to "change mode," and it lets users modify the access permissions of files or folders.

Using chmod Symbolic Mode

In this method, we use the symbols (for owners: u, g, o; for permissions: r, w, x) to add, remove, or set permissions using the following syntax:

chmod <owner_symbol><operator><permission_symbol> <filename>

Before changing a file's permissions, we first need to see the current ones. For this, we use the ls command:

ls -l

Here the permission string breaks down as follows:

'-': shows the file type.
'rw-': shows the permissions of the user (read and write).
'rw-': shows the permissions of the group (read and write).
'r--': shows the permissions of others (read only).

In the above image, we highlighted one file in which the user has read and write permissions, the group has read and write permissions, and others have only read permission. Here, we are going to add execute permission for others. For this, use the following command:

chmod o+x os.txt

As you can see, the execute permission has been added to the other category. You can also change multiple permissions for different owners at the same time. Continuing the above example, we change the permissions again: we add execute permission for the user, remove write permission from the group, and add write permission for others. For this, we can run the command below:

chmod -v u+x,g-w,o+w os.txt

Note: Use commas to separate the owner clauses, but do not leave spaces between them.

Using chmod Absolute Mode

Similarly, you can change permissions through absolute mode.
In this method, numbers and simple arithmetic represent the permissions, as shown in the summary above. For example, suppose the file's current permissions break down as follows:

User: Read + Write → 4 + 2 = 6
Group: Read + Write → 4 + 2 = 6
Other: Read + Execute → 4 + 1 = 5

So the permission is represented as 665. Now, we are going to remove read permission from the user and from others, and the calculation becomes:

User: 6 - 4 (remove read) = 2
Group: 6 (unchanged) = 6
Other: 5 - 4 (remove read) = 1

The updated permission is therefore represented as 261. To apply it, use the following chmod command:

chmod -v 261 os.txt

Change User Ownership of a File

Apart from changing file permissions, you may also run into situations where you have to change a file's ownership. For this, the chown command ("change owner") is used. A file's details are laid out as:

<filetype> <file_permission> <user_name> <group_name> <file_name>

So, in the above example, the owner's (user) name is 'prateek', and you can change it only to a user name that exists on your system. Before changing the username, first list all the users with one of the following commands:

cat /etc/passwd

or

awk -F ':' '{print $1}' /etc/passwd

Now, you can change the owner of a current or new file to any of these names. The general syntax to change the file owner is as follows:

sudo chown <new_username> <filename>

Note: sudo permission is required in some cases.

Based on the above result, we want to change the username from 'prateek' to 'proxy'. To do this, we run the command below in the terminal:

sudo chown proxy os.txt

Change Group Ownership of a File

First, list all the groups present on your system using the following command:

cat /etc/group | cut -d: -f1

The chgrp command ("change group") changes the file's group. Here, we change the group from 'prateek' to 'disk' using the following command:

sudo chgrp disk os.txt

Conclusion

Managing file permissions is essential for access control and data security. In this guide, we focused on changing file permissions in Linux, which lets you control ownership (user, group, others) and permissions (read, write, execute). Users can add, remove, or set permissions according to their needs, and can easily modify file permissions with the chmod command using either the symbolic or the absolute method.
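As a minimal sketch tying the pieces together (os.txt and the proxy/disk names follow the article's example; the 640 permission set is just a common illustration, not the 261 value used above), the numeric and ownership changes look like this end to end:

ls -l os.txt                     # check current permissions and ownership
chmod 640 os.txt                 # user: read+write, group: read, other: none
sudo chown proxy:disk os.txt     # change owner and group in one call
ls -l os.txt                     # confirm the changes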
20. Operating systems use packets to transfer data across a network. These are small chunks of information that carry data and travel between devices. Moreover, when a network problem arises, packets help identify the root cause of the underlying issue. How? By tracing the route those packets take. The traceroute command in Linux helps you map the path packets follow while traveling to a specific destination. This, in turn, helps you troubleshoot network latency, packet loss, network hops, DNS resolution issues, slow website access, and more. So, in this blog, we will explain simple ways to use the traceroute command in Linux.

How To Use the Traceroute Command in Linux

Firstly, traceroute does not come preinstalled in many Linux distributions. However, you can install it by executing the command below that matches your system:

Debian/Ubuntu: sudo apt install traceroute
Fedora: sudo dnf install traceroute
Arch Linux: sudo pacman -Sy traceroute
openSUSE: sudo zypper install traceroute

After installation, you can run the traceroute command by entering:

traceroute <destination_IP>

Replace <destination_IP> with the IP address of the destination device. Once you run the command, your system displays the list of hops with their IP addresses and response times. Hops are the devices your packets pass through while traveling to a specific destination. For example, let's use the traceroute command on Google's public DNS IP address:

traceroute 8.8.8.8

The result shows only one hop while marking the others with an asterisk (*). This happens because the subsequent hops did not respond within the timeout period of 3 seconds. Moreover, the traceroute command, by default, uses DNS resolution to get the hostnames of hops, which slows down the process. You can skip that part and have it display only IP addresses by using the -n option:

traceroute -n <destination_IP>

If you want to limit the number of hops, use the -m option with the traceroute command:

traceroute -m N <destination_IP>

Here, put the desired number of hops in place of N. On execution, it returns at most N hops in the results. The traceroute command displays every hop's round-trip time (RTT). You can get more detailed timing information with the -I option:

traceroute -I <destination_IP>

This command sends ICMP echo requests, which often yields more accurate RTT data. For instance, retake the example of Google:

Tip: If your specified destination restricts ICMP packets, you can instead trace with UDP packets by employing the -U option:

traceroute -U <destination_IP>

If you want to explore more options for traceroute, run the command below:

traceroute --help

A Quick Wrap-up

Traceroute is a handy CLI utility for diagnosing network-related issues in Linux. It traces the path of packets to help identify where problems occur along the route. Hence, we have explained the essential details of the traceroute command with the help of some examples.
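As a small combined sketch (8.8.8.8 is just a convenient public destination), you can pair the options above to get a quicker, numeric trace:

traceroute -n -m 15 8.8.8.8      # numeric output, at most 15 hops
sudo traceroute -I 8.8.8.8       # ICMP probes; this mode often requires root privileges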
21. htop is a CLI utility for viewing an interactive list of running processes in real time. It is a more feature-rich and user-friendly alternative to the top command. The htop command allows you to manage system processes, monitor resources, and perform other administration tasks. One of htop's most prominent features is its color-coded process list, which helps you differentiate processes based on resource usage. Furthermore, it lets you customize the results with its sort and filter options. So, this short tutorial covers how to use the htop command in Linux without hassle.

Unlike top, the htop command is not preinstalled on most Linux systems. That's why you must first install it using the command that matches your distribution:

Debian/Ubuntu: sudo apt-get install htop
Fedora: sudo dnf install htop
RHEL/CentOS: sudo yum install htop

Now you can use the htop command, so let's start with the basics:

htop

When you execute the above command, it launches the htop utility. Here, you can use the arrow keys to navigate up and down the process list. Moreover, press 'F1' or '?' to open the help screen for additional navigation shortcuts.

Sort Processes in htop

In htop, you can sort the processes by CPU, memory, and other usage. Open the sorting menu by pressing F6. For example, select the PERCENT_CPU option and press 'Enter.' As you can see in the above image, all the processes are now sorted by CPU consumption.

Search and Filter Processes in htop

To search for any process in htop, press 'F3' to open the search bar. Similarly, press 'F4' to filter the processes.

Additional Options with htop

-d, --delay=[argument]: Sets the delay between updates, measured in tenths of a second. For instance, to refresh every 10 seconds, you would enter '--delay=100'.
-C, --no-color: Disables color output, which is helpful on systems with limited terminal support for colors.
-u, --user=[username]: Displays the processes of a specific user. Just replace '[username]' with the target user's name.
-p, --pid=[PID1,PID2]: Displays information for the specified process IDs. For example, let's check the details of PID 1:

htop -p 1

-v, --version: Prints htop version information.
-h, --help: Displays a help message with usage information.

Kill a Process in htop

If you want to kill a process, select it and press the 'F9' key or 'k' to send a kill signal to the selected process.

Wrapping Up

htop is a powerful utility for interactively checking system processes in real time. This tutorial briefly discussed how to use the htop command. As htop is not preinstalled on most Linux distributions, your first step is to install it using the commands mentioned above. Later, we explained how to sort, search, filter, and kill processes from within the htop utility.
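As a minimal sketch (the username alice and PID 2304 are placeholders), the options above can be combined on the command line:

htop -u alice -d 50       # show only alice's processes, refreshing every 5 seconds
htop -p 1,2304            # watch just PID 1 and a second, hypothetical PID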
22. All UNIX-based operating systems, including Linux, follow the principle that "everything is a file." These systems treat regular files, directories, processes, symbolic links, and devices such as external hardware as files. You can create, modify, and delete files from the command line or from the File Manager. Deleting files becomes essential when you accidentally create multiple files that the system no longer needs. So, in this quick blog, we will explain quick ways to delete a file in Linux with no trouble. There are a few methods for deleting files, so let's look at them individually with suitable examples.

The rm Command

You can use the rm command to delete a file from the terminal. For example, to delete "filename.txt" located in the Downloads directory, first run the command below to open that directory in the terminal:

cd ~/Downloads

Then, use the following command:

rm filename.txt

The rm command doesn't display any output, but you can use the -v option to make it verbose:

rm -v filename.txt

If you want to delete multiple files from the current directory, you can list them all in a single rm command. For example, to delete three files (file1.txt, file2.txt, and file3.txt), run the command below:

rm file1.txt file2.txt file3.txt

If you want to delete all the files with the same extension, you can run the following command:

rm *.txt

As the above image shows, we have deleted all the .txt files from the Downloads directory. Moreover, you can use multiple extensions in a single command to delete different types of files simultaneously. For example, let's delete all the files with the .txt and .sh extensions:

rm -v *.sh *.txt

Similarly, you can empty a directory by passing just the * to the rm command:

rm *

Remember, the above command deletes all files except directories. Hence, if there is a subdirectory, the terminal shows the following output:

However, you can use the -r option with the rm command to delete subdirectories. The -r option recursively deletes a directory along with its contents:

rm -r *

If you want a confirmation prompt before deleting each file, use the -i option:

rm -i *

Once you run the command, the system shows a confirmation prompt, so all you have to do is press Y to delete or N to decline.

From the File Manager

We recommend deleting files from the File Manager if you are a Linux beginner. First open the File Manager and locate the directory. Now select the file and right-click it to open the context menu. Finally, click the Move to Trash option or press the Delete key.

A Quick Wrap-up

Linux has various commands and methods to delete a file quickly. Users should know how to delete files to maintain an organized system and keep storage consumption minimal. This quick tutorial explained two ways of doing so. First, we discussed how the rm command works, and then we briefly explained the step-by-step process of deleting files through the GUI.
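As a small, cautious sketch (the file and directory names are placeholders), combining the verbose and interactive flags gives you a prompt for each deletion plus a report of what was removed:

rm -vi draft1.txt draft2.txt      # ask before removing each file, then report what was deleted
rm -rvi old_project/              # recursively prompt through a directory before deleting it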
23. The Logrotate utility simplifies the process of administering log files. It rotates and replaces log files to manage their size and keep them organized while preserving the information inside them. For example, it can maintain seven log files to keep daily records for seven days. While rotating log files, Logrotate deletes irrelevant old logs, preventing them from consuming excessive disk space. It runs periodically in the background to keep your system organized and clean. So, if you want to learn about Logrotate, this blog is for you. Here, we have included in-depth information on how to set up Logrotate on Linux.

How To Set Up Logrotate on Linux

Many Linux distributions ship with Logrotate preinstalled. However, if your system does not have it, use the following command to install it:

sudo apt install logrotate

Now, let's move to the configuration part. There are two kinds of Logrotate configuration: global and application-specific. Open the '/etc/logrotate.conf' file with a text editor. It is Logrotate's primary configuration file, and any changes made to it affect the whole system.

sudo nano /etc/logrotate.conf

This file has three key sections:

The rotation frequency, i.e., how often it should rotate the logs. It is set to weekly by default, but you can change it to daily, weekly, or monthly.
The number of rotated files it should keep. Adjust this value based on how much historical data you want to retain. For instance, 'rotate 4' tells it to keep the latest four rotated log files and delete older ones to free up disk space.
The permissions and ownership of the new log files it creates.

You can tweak these settings to whatever suits your system best. For instance, to maintain weekly records for one month (28 days), you would enter:

weekly
rotate 4
create 0644 root root

This way, it rotates one file each week and keeps four such files. Further, it creates a new log file for current events, giving the root user and group read-and-write permission and everyone else read-only access.

If you need to monitor a specific application's logs for underlying issues, you can tailor the log rotation settings for that application by creating a separate Logrotate configuration file. Let's take conda as an example. First, create its file using:

sudo nano /etc/logrotate.d/conda

In this file, add configuration specific to the conda logs:

/var/log/conda/*.log {
    weekly
    rotate 4
    compress
    delaycompress
    missingok
    notifempty
    create 0644 root root
}

Here, the compress directive compresses rotated files so they take up less space. With the delaycompress directive, the most recently rotated file is kept uncompressed, making it convenient for users to refer to. The missingok option tells Logrotate to ignore a missing log file and continue without raising an error. Finally, with notifempty, Logrotate won't rotate an empty log file.

Logrotate should run automatically with the default settings. However, you can confirm this by checking its daily cron entry:

nano /etc/cron.daily/logrotate

A Quick Wrap-up

Knowing how to configure the Logrotate utility is crucial for system administrators and essential for disk management on Linux devices. Hence, this blog explained the approaches used to set up Logrotate on Linux. You can modify the configuration globally or change it for specific applications.
Moreover, application-specific configurations should be used responsibly, because they override the global settings for the logs they cover.
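As a quick, hedged sketch (the conda file path simply follows the example above), you can dry-run and then force a rotation to verify a new configuration before relying on the scheduled run:

sudo logrotate -d /etc/logrotate.d/conda     # debug/dry-run: prints what would happen, changes nothing
sudo logrotate -f /etc/logrotate.d/conda     # force an immediate rotation using that configuration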
24. Cron is a time-based job scheduler that lets you schedule tasks and run scripts periodically at a fixed time, date, or interval. These scheduled tasks are called cron jobs. With cron jobs, you can efficiently perform repetitive tasks like clearing the cache, synchronizing data, and running system backups and maintenance. Cron jobs also enable command automation, which can significantly reduce the chance of human error. However, many Linux users run into issues while setting up a cron job. So, this article provides examples of how to set up a cron job in Linux.

How To Set Up a Cron Job

First, you must know about the crontab file to set up a cron job in Linux. You can view this file to see information about existing cron jobs and edit it to introduce new ones. Before opening the crontab file, use the command below to check that your system has the cron utility:

sudo apt list cron

If it does not produce output like that shown in the given image, install cron using:

sudo apt-get install cron -y

Now, verify that the cron service is active with the following command:

service cron status

Once you are done, edit the crontab to start a new cron job:

crontab -e

The system will ask you to select a text editor. For example, we choose the nano editor by entering '1'. However, you can choose any editor, because what matters for a cron job is its format, which we'll explain in the next steps. After choosing an editor, the crontab file opens in a new window with basic instructions displayed at the top. Finally, append the following crontab expression to the file:

* * * * * /path/script

Here, the five asterisks (*) represent, in order, the minute, hour, day of the month, month, and day of the week. Together they define every aspect of the schedule so that the cron job can run at the intended time. Moreover, replace 'path' and 'script' with the path containing the target script and the script's name, respectively.

Time Format to Schedule Cron Jobs

As the time format discussed above can be confusing, let's go through it briefly:

In the Minutes field, you can enter values in the range 0-59. For an input of 9, the job runs at the 9th minute of every hour.
For Hours, you can input values ranging from 0 to 23. For instance, the value for 2 PM would be '14'.
The Day of the Month can be anywhere between 1 and 31, where 1 and 31 indicate the first and last day of the month. For a value of 17, the cron job runs on the 17th day of every month.
In place of Month, you can enter a value from 1 to 12, where 1 means January and 12 means December. The task executes only during the months you specify here.
The Day of the Week accepts 0 to 7, where both 0 and 7 represent Sunday, 1 is Monday, and 6 is Saturday.

Note: The value '*' means every acceptable value. For example, if '*' is used in the minutes field, the task runs every minute of the specified hour.

For example, below is the expression to schedule a cron job for 9:30 AM every Tuesday:

30 9 * * 2 /path/script

And here is one that sets up a cron job for 5 PM on weekends in April:

0 17 * 4 0,6 /path/script

As the above expression demonstrates, you can use a comma to provide multiple values in a field (and a dash to specify a range). The upcoming section explains the use of these operators in a crontab expression.

Arithmetic Operators for Cron Jobs

Regardless of your experience with Linux, you'll often need to automate jobs to run twice a year, three times a month, and so on.
In such cases, you can use operators to make a single cron job run at different times:

Dash (-): Specifies a range of values. For instance, to run a job every minute from 12 AM to 12 PM, you can enter * 0-12 * * * /path/script.
Forward Slash (/): A slash defines a step through a field's acceptable values. For example, to make a cron job run quarterly (at midnight on the first day of every third month), you would enter 0 0 1 */3 * /path/script.
Comma (,): A comma separates multiple values in a single field. For example, the cron expression for a task to be executed on Mondays and Wednesdays is * * * * 1,3 /path/script.
Asterisk (*): As discussed above, the asterisk represents all values the field accepts. An asterisk in the Month field schedules the cron job for every month.

Commands to Manage a Cron Job

Managing cron jobs is also essential. Here are a few options you can use to list, edit, and delete cron jobs:

The -l option displays the list of cron jobs.
The -r option removes all cron jobs.
The -e option edits the crontab file.

Every user on your system gets a separate crontab file. You can also perform the above operations on their files by adding their username to the command: crontab -u username [options].

A Quick Wrap-up

Executing repetitive tasks manually is a time-intensive process that reduces your efficiency as an administrator. Cron jobs let you automate tasks like running a script or command at a specific time, reducing redundant workload. Hence, this article comprehensively explained how to create a cron job in Linux. Furthermore, we covered the proper usage of the time format and the arithmetic operators using appropriate examples.
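As a hedged sketch (the script paths under /home/user/ are purely illustrative), a few typical crontab entries combining these fields and operators might look like this:

# m    h   dom mon dow  command
0      2   *   *   *    /home/user/backup.sh          # every day at 2:00 AM
*/15   *   *   *   *    /home/user/check_service.sh   # every 15 minutes
0      17  *   *   6,0  /home/user/weekend_sync.sh    # 5:00 PM on Saturdays and Sundays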
25. Processes are the running instances of programs that consume system resources. Listing these processes helps you monitor system activity and troubleshoot issues. That's why Linux offers multiple tools and utilities you can use to list the currently running processes. However, many beginners don't know the right way to list processes without errors. So, in this short article, we explain different methods to list processes in Linux. We have divided this section into multiple parts to give you the best commands for listing processes.

The ps Command

The ps, or "process status," command is the most common utility for listing processes in the terminal:

ps -e

The -e option tells ps to show every process, regardless of which user owns it. Furthermore, you can make ps produce additional details using the "aux" options:

ps aux

The top Command

If you want a real-time list of system processes, use the top command. It continuously updates the process list as processes start and finish, providing more current results:

top

The above command, on execution, shows the list of processes sorted by CPU consumption. Moreover, you cannot interact with the terminal until you press "q" to quit the top utility.

The pstree Command

The pstree command is quite different from the above two because it displays the hierarchical relationship of processes in a tree-like structure. It helps you visually understand how a process starts and how it is connected to other active processes:

pstree

The Glances Tool

The Glances tool provides a brief overview of the currently running processes. However, you have to install it first by running the command that matches your distribution:

Debian/Ubuntu: sudo apt install glances
Fedora: sudo dnf install glances
Arch Linux: sudo pacman -Sy glances
openSUSE: sudo zypper install glances

After a successful installation, you can open Glances by running the following command:

glances

A Quick Summary

Knowing how to list processes helps you monitor system activity and stop processes you no longer need. This article covered four ways: the ps, top, and pstree commands, and the Glances tool. You can choose any of them according to what suits you best. We recommend you use these commands carefully, or you may run into errors.
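As a minimal sketch building on the ps examples above (the chosen columns are just one useful combination), you can sort and trim the output to spot resource-hungry processes:

ps aux --sort=-%mem | head -n 10                  # ten processes using the most memory
ps -eo pid,comm,%cpu --sort=-%cpu | head -n 10    # ten processes using the most CPU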
