Everything posted by Blogger

  1. Blogger posted a blog entry in Programmer's Corner
    by: aiparabellum.com Thu, 05 Dec 2024 04:40:38 +0000 NovelAI stands out as a revolutionary tool in the realm of digital storytelling, combining the power of advanced artificial intelligence with the creative impulses of its users. This platform is not just a simple writing assistant; it is an expansive environment where stories come to life through text and images. NovelAI offers unique features that cater to both seasoned writers and those who are just beginning to explore the art of storytelling. With its promise of no censorship and the freedom to explore any narrative, NovelAI invites you to delve into the world of creative possibilities. Features of NovelAI NovelAI provides a host of exciting features designed to enhance the storytelling experience: AI-Powered Storytelling: Utilize cutting-edge AI to craft stories with depth, maintaining your personal style and perspective. Image Generation: Bring characters and scenes to life with powerful image models, including the leading Anime Art AI. Customizable Editor: Tailor the writing space to your preferences with adjustable fonts, sizes, and color schemes. Text Adventure Module: For those who prefer structured gameplay, this feature adds an interactive dimension to your storytelling. Secure Writing: Ensures that all your stories are encrypted and private. AI Modules: Choose from various themes or emulate famous authors like Arthur Conan Doyle and H.P. Lovecraft. Lorebook: A feature to keep track of your world’s details and ensure consistency in your narratives. Multi-Device Accessibility: Continue your writing seamlessly on any device, anywhere. How It Works Using NovelAI is straightforward and user-friendly: Sign Up for Free: Start by signing up for a free trial to explore the basic features. Select a Subscription Plan: Choose from various subscription plans to unlock more features and capabilities. Customize Your Experience: Set up your editor and select preferred AI modules to tailor the AI to your writing style. Start Writing: Input your story ideas and let the AI expand upon them, or use the Text Adventure Module for a guided narrative. Visualize and Expand: Use the Image Generation feature to visualize scenes and characters. Save and Secure: All your work is automatically saved and encrypted for your eyes only. Benefits of NovelAI The benefits of using NovelAI are numerous, making it a versatile tool for any writer: Enhanced Creativity: Overcome writer’s block with AI-driven suggestions and scenarios. Customization: Fully customizable writing environment and AI behavior. Privacy and Security: Complete encryption of stories ensures privacy. Flexibility: Write anytime, anywhere, on any device. Interactive Storytelling: Engage with your story actively through the Text Adventure Module. Diverse Literary Styles: Experiment with different writing styles and genres. Visual Storytelling: Complement your narratives with high-quality images. Pricing NovelAI offers several pricing tiers to suit various needs and budgets: Paper (Free Trial): Includes 100 free text generations, 6144 tokens of memory, and basic features. Tablet ($10/month): Unlimited text generations, 3072 tokens of memory, and includes image generation and advanced AI TTS voices. Scroll ($15/month): Offers all Tablet features plus double the memory and monthly Anlas for custom AI training. Opus ($25/month): The most comprehensive plan with 8192 tokens of memory, unlimited image generations, and access to experimental features. 
NovelAI Review Users have praised NovelAI for its versatility and user-friendly interface. It’s been described as a “swiss army knife” for writers, providing tools that spark creativity and make writing more engaging. The ability to tailor the AI and the addition of a secure, customizable writing space are highlighted as particularly valuable features. Moreover, the advanced image generation offers a quick and effective way to visualize elements of the stories being created. Conclusion NovelAI redefines the landscape of digital storytelling by blending innovative AI technology with user-driven customization. Whether you’re a hobbyist looking to dabble in new forms of writing or a professional writer seeking a versatile assistant, NovelAI offers the tools and freedom necessary to explore the vast expanse of your imagination. With its flexible pricing plans and robust features, NovelAI is well worth considering for anyone passionate about writing and storytelling. The post NovelAI appeared first on AI Parabellum.
  2. by: Women in Technology
As a woman navigating the world of tech and subsequently leadership, you're likely all too familiar with the unique challenges that come with the territory. Whether it's battling imposter syndrome or finding your voice in rooms where you might be the only woman, the journey can sometimes feel overwhelming. One thing I've learned through my own experience is that you don't have to go it alone. In fact, mentorship has been one of the most important elements in my own growth, and it continues to shape how I approach my career.
But here's the thing: mentorship isn't just about having one person by your side throughout your entire career. Your needs change as you grow, and the mentors who help you early on might not be the same ones who guide you when you're at a senior leadership level. The beauty of mentorship lies in its fluidity, allowing you to seek out different people at different stages of your career to help tackle the challenges you're facing in that moment.
When I think back to some of the biggest hurdles I've faced as a woman in tech leadership, it's clear that they were not just about technical competence. Sure, mastering technology skills was critical early on, but as I grew into leadership roles, the challenges became more nuanced. There was the pressure to prove myself in a field where women are still underrepresented, the occasional frustration of having my ideas dismissed in meetings, and the delicate balance between being empathetic and authoritative—a balance that women often feel they must manage more carefully than men. You may have felt the same way—wondering how to assert yourself without being labeled as "too aggressive," or finding that work-life balance is an ongoing struggle, especially if you're juggling family responsibilities alongside the demands of your role. These challenges are real, and they can sometimes make you question whether you belong in the room at all. But you do. And this is where mentorship becomes so important.
In the early stages of my career, I sought out mentors who could help me sharpen my technical skills and build confidence. One of my first mentors was my manager at Freddie Mac—Angie Enciso. Angie was assertive, a thorough technologist and data engineer. The larger the problem, the calmer Angie became. I approached her expressing my desire to learn from her style and being transparent about how nervous production support calls would make me as a brand-new NOC Sybase DBA. Angie taught me how to handle the pressure of tight deadlines while still delivering high-quality work. I leaned on her guidance as I found my footing in a complex field.
Then there was a female leader during my tenure with Fannie Mae who taught me how to operate in male-dominated executive spaces, providing insights I wouldn't have been able to see from my own perspective. She wasn't just a strategic advisor—she helped me understand the unwritten rules of networking and how to ensure that my voice was heard even when I felt overlooked.
Later on, as I transitioned into leadership, the nature of my mentorship relationships changed. When I joined Capital One, I did not have the experience of managing large teams. I had been a people leader before, but nothing could prepare me for the scale I was required to operate at within Capital One. I found a great mentor in my leader Raghu Valluri, who helped me see the bigger picture—how to lead teams, navigate corporate politics, and make decisions that had a broad impact.
He was instrumental in helping me develop a leadership style that was true to myself, even when the pressure was to conform to traditional, sometimes rigid, leadership molds. Through that mentor-mentee relationship, I found my footing and effective ways to lead my team through multi-million-dollar initiatives which had significant revenue and partnership impacts for the larger organization. Very recently, I transitioned back to federal contracting and was contemplating establishing my venture in the field. I leaned on mentorship again and approached Gautam Ijoor, the CEO of Alpha Omega and unashamedly asked for the opportunity to establish a mentor-mentee relationship. Gautam was kind, made time for me from his extremely busy schedule and graciously guided me through a process which helped me realize the very goals I was intending to walk towards. It was through those conversations and eventual contemplation that I realized how I can effectively navigate the next steps in my career journey. These experiences taught me that mentorship is not about sticking with one person for the long haul. Instead, it is about finding the right people who can help you with specific challenges as they arise. The mentor who guides you through technical growth may not be the same one who helps you navigate the boardroom. And that is ok. One thing I’ve come to believe strongly is the importance of having diverse mentors. Just as you need a variety of skills to succeed in leadership, you also need different perspectives to tackle the challenges that come your way. Whether it’s a mentor who’s walked in your shoes as a woman in tech or someone who offers a completely different viewpoint, having a range of voices to turn to is invaluable. For women in tech leadership having both male and female mentors can offer a well-rounded perspective. Female mentors can share their experiences of navigating the same biases and barriers you might be facing. They can offer practical advice on how to make your voice heard, how to lead authentically, and how to manage the constant balancing act of work and life. Meanwhile, male mentors can help you understand the dynamics of male-dominated spaces, giving you insights into how to succeed without losing your sense of self. Mentoring Others: Paying it Forward As I’ve progressed in my own career, one of the things that brings me the most satisfaction is mentoring others. There’s something incredibly rewarding about helping someone else see their potential and guiding them through the same obstacles I once faced. I’ve mentored people at various stages of their careers, and one thing I always emphasize is that you don’t have to do it all alone. If there’s one piece of advice I can offer from my own experience, it’s this: don’t be afraid to seek out mentorship throughout your entire career. You don’t need to have all the answers, and you certainly don’t have to figure everything out on your own. By finding mentors who understand your challenges—whether it’s mastering technical skills, building leadership confidence, or navigating the complexities of work-life balance—you can grow in ways you never thought possible. And as you grow, remember to pay it forward. Mentoring others isn’t just about giving back; it’s about continuing the cycle of growth, empowerment, and inclusion in an industry that needs more diverse voices. Together, we can create a tech leadership landscape where more women thrive—and where mentorship plays a pivotal role in making that possible. 
If you’re looking for direction and knowledge for career advancement and success, or have insight to pass on to professional women, learn more about the WIT Mentor-Protégé program here: https://www.womenintechnology.org/mentor-protege-program Reha Malik is Vice President of Data and ML tech at Alpha Omega, Technology Executive, Graduate teaching faculty at George Mason University and WIT Member
  3. by: Girls Who Code Tue, 29 Oct 2024 16:19:25 GMT
As we wrap up October's spooky season, let's remember: the only things that should be creeping up on you are witches and vampires, not cyber threats lurking in the shadows! As many of you know, October is also Cybersecurity Awareness Month, which makes sense, because what could be scarier than having your personal information spread without your permission?
At Girls Who Code, we've spent the last few weeks providing our students with resources, tools, and tricks to keep themselves safe online. But we're also committed to helping our community build a secure world all year long. Because cybersecurity is about more than making sure they have the strongest password possible (though that's extremely important, too). It's also about making sure they have all the protection and knowledge they need to keep malicious actors from slithering into their digital world.
Let's be honest, all our lives are becoming more and more online. By the time our students reach high school, they're using the internet for homework, for research, and for communicating with teachers and classmates. Hundreds of seemingly basic tasks are automated through apps, and social media has made students visible to millions of people around the world. While this has made the lives of so many young people easier, more exciting, and more expansive, it's also made them vulnerable in ways we may not even realize.
That's why we were so excited to work with The Achievery, created by AT&T, to roll out some essential cybersecurity Learning Units for 9th-10th grade students. In today's tech-driven environment, understanding cybersecurity isn't just a nice-to-have — it's essential. Our students are diving into practical tips, like keeping software up to date and spotting phishing emails, while also learning the importance of visiting secure websites (you know, those with https:// instead of http://). We also want them to feel empowered to share this knowledge within their communities. Plus, they get useful checklists for adjusting browser settings on their devices. With units like "Online Privacy," "Defend Against Malware and Viruses!," and "DNS (Domain Name System) Uncovered," we're not just teaching them about cybersecurity; we're helping them build a safer online future for themselves and others.
We encourage our community to check out these, and so many other free and accessible tools, on The Achievery, which works to make digital learning more entertaining, engaging, and inspiring for K-12 students everywhere. As Cybersecurity Awareness Month wraps up, let's keep empowering our students to embrace the internet's benefits while confidently navigating its challenges. All young people deserve to protect themselves while enjoying a safer online experience that inspires them to thrive in the digital world.
  4. by: Zainab Sutarwala Tue, 15 Oct 2024 17:25:10 +0000
Malware, or malicious software, poses significant threats to both individuals and organisations. It is critical for software developers and security professionals to understand malware, as that knowledge helps to protect systems, safeguard sensitive information, and maintain effective operations. In this blog, we will provide detailed insights into malware, its impacts, and prevention strategies. Stay with us till the end.
What is Malware?
Malware refers to software designed intentionally to cause damage to a computer, server, computer network or client. The term covers a range of harmful software types, including worms, viruses, Trojan horses, spyware, ransomware, and adware.
Common Types of Malware
Malware comes in different types, each with its own features and characteristics:
Viruses: Code that attaches itself to clean files and infects them, spreading to other files and systems.
Worms: Malware that replicates itself and spreads to other computer systems, exploiting network vulnerabilities.
Trojan Horses: Malicious code disguised as legitimate software, often tricking users into installing it.
Ransomware: Programs that encrypt the user's files and demand payment to unlock them.
Spyware: Software that secretly monitors and gathers user information.
Adware or Scareware: Software that serves unwanted ads on the user's computer, mostly as pop-ups and banners. Scareware is an aggressive and deceptive version of adware that "informs" users of upcoming cyber threats and offers to "mitigate" them for a fee.
How Does Malware Spread?
Malware spreads through different methods, including:
Phishing emails
Infected hardware devices
Malicious downloads
Exploiting software vulnerabilities
How Does Malware Attack Software Development?
Malware can attack software development in many ways, including:
Supply Chain Attacks: Attackers target third-party vendors and compromise software that will later be used to attack their customers.
Software Vulnerabilities: Malware exploits known and unknown weaknesses in software code to gain unauthorized access and execute malicious code.
Social Engineering Attacks: These attacks trick developers into installing malware or revealing sensitive information.
Phishing Attacks: Phishing attacks involve sending fraudulent messages or emails that trick developers into clicking on malicious links or downloading attachments.
Practices to Prevent Malware Attacks
Given below are some of the best practices that help prevent malware attacks:
Use Antimalware Software: Installing an antimalware application is important for protecting network devices and computers from malware infections.
Use Email with Caution: Malware can be prevented by implementing safe behaviour on computers and other personal devices, such as not opening email attachments from strange addresses that may carry malware disguised as legitimate attachments.
Network Firewalls: Firewalls on routers connected to the open Internet allow data in and out only under certain conditions, keeping malicious traffic away from the network.
System Updates: Malware takes advantage of system vulnerabilities that are patched over time as they are discovered. "Zero-day" exploits take advantage of unknown vulnerabilities, so updating and patching all known vulnerabilities keeps systems more secure. This includes computers, mobile devices, and routers.
How to Know You Have Malware?
There are several signs that your system may be infected with malware:
Changes to your search engine or homepage: Malware can change your homepage and search engine without your permission.
Unusual pop-up windows: Malware can display annoying pop-up windows and alerts on your system.
Strange programs and icons on the desktop.
Sluggish computer performance.
Trouble shutting down or starting up the computer.
Frequent and unexpected system crashes.
If you notice these issues on your devices, they may be infected with malware.
How To Respond to Malware Attacks?
The most effective security practice uses a combination of the right technology and expertise to detect and respond to malware. Given below are some tried and proven methods:
Security Monitoring: Tools used to monitor network traffic and system activity for signs of malware.
Intrusion Detection System (IDS): Detects suspicious activity and raises alerts.
Antivirus Software: Protects against known malware threats.
Incident Response Plan: Having a proper plan in place to respond to malware attacks efficiently.
Regular Backups: Regular backups of important data to reduce the impact of attacks.
Conclusion
The malware threat is evolving constantly, and software developers and security experts need to stay well-informed and take proactive measures. By understanding the different kinds of malware, the ways they attack software development, and the best practices for prevention and detection, you will be able to help protect your data and systems from attack and harm.
FAQs
What's the difference between malware and a virus? A virus is one kind of malware, while malware refers to almost any class of code designed to harm and disrupt your computing systems.
How does malware spread? There are a lot of malware attack vectors: installing infected programs, clicking infected links, opening malicious email attachments, and using infected removable devices such as a virus-infected USB drive.
What action should you take if your device gets infected by malware? Use a reputable malware removal tool to scan your device, look for malware, and clean the infection. Restart your system and scan again to ensure the infection is removed completely.
The post Understanding Malware: A Guide for Software Developers and Security Professionals appeared first on The Crazy Programmer.
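To make the "Security Monitoring" and "Regular Backups" practices above a little more concrete, here is a minimal Bash sketch of a file-integrity check: it records SHA-256 hashes of the files in a directory and later compares the files against that baseline. The /srv/app path and baseline file name are only examples, and a real deployment would normally rely on dedicated tooling (an antimalware scanner or host-based IDS) rather than a hand-rolled script.
# Record a baseline of file hashes (run once and store the baseline somewhere safe)
$ find /srv/app -type f -exec sha256sum {} + | sort > baseline.sha256
# Later, verify the files against the baseline; changed or missing files are reported
$ sha256sum --check --quiet baseline.sha256 || echo "WARNING: files differ from baseline"
A mismatch is only a signal that something changed, not proof of infection, but unexpected changes to binaries or configuration files are exactly the kind of event the monitoring practices above are meant to surface.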
  5. By: Linux.com Editorial Staff Tue, 08 Oct 2024 13:50:45 +0000
Exciting news! The Tazama project is officially a Digital Public Good, having met the criteria to be accepted into the Digital Public Goods Alliance! Tazama is a groundbreaking open source software solution for real-time fraud prevention, and offers the first-ever open source platform dedicated to enhancing fraud management in digital payments.
Historically, the financial industry has grappled with proprietary and often costly solutions that have limited access and adaptability for many, especially in developing economies. This challenge is underscored by the Global Anti-Scam Alliance, which reported that nearly $1 trillion was lost to online fraud in 2022. Tazama represents a significant shift in how financial monitoring and compliance have been approached globally, challenging the status quo by providing a powerful, scalable, and cost-effective alternative that democratizes access to advanced financial monitoring tools that can help combat fraud.
Tazama addresses key concerns of government, civil society, end users, industry bodies, and the financial services industry, including fraud detection, AML compliance, and the cost-effective monitoring of digital financial transactions. The solution's architecture emphasizes data sovereignty, privacy, and transparency, aligning with the priorities of governments worldwide. Hosted by LF Charities, which will support the operation and function of the project, Tazama showcases the scalability and robustness of open source solutions, particularly in critical infrastructure like national payment switches.
We are thrilled to be counted alongside many other incredible open source projects working to achieve the United Nations Sustainable Development Goals. For more information, visit the Digital Public Goods Alliance Registry.
The post Project Tazama, A Project Hosted by LF Charities With Support From the Gates Foundation, Receives Digital Public Good Designation. appeared first on Linux.com.
  6. by: Always Sia Strike 2024-07-31T23:29:54-07:00 My the year is flying by. I haven’t written in a while - not for a lack of thoughts, but because time, life, probably could be time managing better but oh :whale:. We’re back though - so let’s talk work community. During this year’s Black in Data Week, there was a question during my session about how to get to know people organically and ask questions without fear when you start a new job. After sharing what has worked for me, the lady with the question came back to me with positive feedback that all the ideas were helpful. I didn’t think anything of it until Wellington, one of my friendlies from the app whose mama named it Twitter, twote this and had me thinking: He’s so right. No one is going to care about your career more than you do. However, one of the people who can make the effort to drive your development is your manager. Wellington and I had an additional exchange in which he echoed how important community is. This brought me back to June and that lady’s question during BID week - so I thought to share, in a less ephemeral format, what building a community at work looks like. About Chasing Management Before I share some tips, one sword I always fall on is - chase great management. If you can afford to extend a job search because you think you could get a better manager than the one who is offering you a job, do it. Managers are like a great orchestra during a fancy event. You don’t think about the background music when it’s playing and you’re eating your food (this is what I imagine from all those movies :joy:). But you will KNOW if it’s bad because something will sound off and irk your ears. When you are flying high and your manager is unblocking things, providing you chances to contribute, and running a smooth operation, you hardly think of them when you wake up in the morning - you just do your job. But if they’re not good at what they do, you could wake up in the morning thinking “ugh - I gotta go work with/for this person?”. It changes the temperature in the room. So if you can afford an extra two weeks on a job search to ask questions and get the best available manager on the market, consider investing in your mentals for the long term :heavy_exclamation_mark: I’m sure you’re like yeah great, Sia - how do I do that? Well not to toot toot, but here are some questions I like asking to learn a bit more about my potential new culture. Additionally, listen to one of my favorite humans and leaders, Taylor Poindexter, in this episode of the Code Newbie podcast talking about creating psychological safety at work (shout out to Saron and the team!). Taylor has been one of my champions at work and such a great manager for her team - I’m always a little envious I’m not on it :pleading_face: but I digress. Keep winning, my girl! Additionally, I’ll start here a list of the best leaders I know - either from personal experience working with and/or for them, interviewing to work on their teams, or from second hand knowledge of someone (I trust) else’s 1st hand experience. As of this writing, they will be listed with a workplace they’re currently in and only if they publicly share it on the internet. 
Taylor Poindexter, Engineering Manager II @ Spotify (Web, Full Stack Engineering) Angie Jones, Global VP of Developer Relations @ TBD/Block Kamana Sharma (Full Stack, Web, and Data Engineering) Nivia Henry, Director of Engineering Bryan Bischof (Data/ML/AI) Jasmine Vasandani (Data Science, Data Products) Dee Wolter (Accounting, Tax) Divya Narayanan (Engineering, ML) Dr. Russell Pierce (Data/ML/Computer Vision) Marlena Alhayani (Engineering) Andrew Cheong (Backend Engineering) - I’m still trying to convince him he’ll be the best leader ever, still an IC :joy: This is off the top of my head at 1:12am while watching a badminton match between Spain and US women round of 16, so I may have forgotten someone, my bad - will keep revisiting and updating as I remember and learn about more humans I aspire to work with. Now the kinda maybe not so good news - you cannot control your manager circumstances all the time. Reorgs, layoffs, people advancing and leaving companies happen. And if you’ve had the privilege of working with great managers, they will leave because they are top of the line so everyone wants to work with them. That’s where community matters. You can’t put all your career development eggs in one managerial basket. Noooooow let’s talk about how you can do that!! (I know, loooong tangent, but we’re getting there). Building Community at Work (Finally :roll_eyes:) Let’s start with the (should be but not always) obvious here - you are building genuine relationships. They therefore can’t be transactional. This is about creating a sustainable community that carries the load together, and not giving you tips on how to be the tick that takes from everyone without giving back. With that,… Find onboarding buddies There are people you started working on the same day with. They will likely have the most in common with you from a workplace perspective. If you happen to run into one of these folks, check in about what’s working and share tips that may have worked for you. When I first started working at my current job, I e-met Andy - a senior backend engineer. We chatted randomly in Slack the first few weeks while working on onboarding projects and found out that we would be working in sister orgs. Whenever I had questions, I’d ask him what he’s learning and every so often we’d “run into each other” in our team work slacks. Sometimes Andy would even help review PRs for me because I had to write Java, and ya girl does not live there. How sweet is that? Medium story short, that’s my work friend he a real good eng … you know the rest! Ask all the questions!! Remember that lady I told you about in the beginning? She had said (paraphrasing) Sia - I just got hired, how do I not look dumb asking questions and they just hired me? My response was they hired you for your skill on the market, not your knowledge of the company. You are expected to have a learning curve so take advantage of that to meet people by asking questions. If you have a Slack channel, activate those hidden helpers - they exist. You may know a lot about the coolest framework, but what about the review and releases process? What about how request for changes are handled? Maybe you see some code that seems off to you - it could be that it’s an intentional patch. The only way to know these idiosyncracies is to ask. 
I promise you someone else is also wondering, and by asking, you are Making it less scary for others to ask Increasing the knowledge sharing culture at your org/team/company Learning faster than you would if you tried to be your own hero (there’s a place and time, don’t overdo it when you’re new and waste time recreating a wheel) One of the best pieces of feedback I ever received at a workplace was that my curiosity and pace of learning is so fast. And to keep asking the questions. I’m summarizing here but that note was detailed and written so beautifully, it made me cry :sob:. It came from one of my favorite people who I have a 1:1 with in a few hours and who started out as … my first interviewer! Who interviewed you? Remember Andrew from my list of favorite leaders above? That’s who wrote that tearjerking note (one of many by the way). He was the person who gave my first technical screen when I was applying for my current job. After I got hired, I reached out and thanked him and hoped we would cross paths. And from above, you know now that he is also one of the best Slack helpers ever. Whenever I ask a question and see “Andrew is typing…”, I grab some tea and a snack because I’m about to learn something soooo well, the experience needs to be savoured. That first note to say, hey thank you for a great interview experience I made it has led to one of the best work sibling I’ve ever had. I also did the same with the recruiter and the engineering manager who did my behavioral interview. I should note - at my job, you don’t necessarily get interviewed with the teammates you’ll potentially work with. None of these folks have been my actual teammates, but we check in from time to time, and look out for each other. The manager was a machine learning engineering manager, Andrew is a backend person, I’m a data engineer - none of that matters. Community is multi-dimensional :heart: I got all my sister teams and me When you’re learning and onboarding, you get to meet your teammates and learn about your domain. It is likely your team is not working in a vacuum. Your customers are either other teams, or customers - which means you have to verify things with other teams to serve external customers. That’s a great way to form relationships. You are going to be seeing these folks a lot when you work together, you may as well set up a 1:1 for 20 minutes to meet and greet. It may not go anywhere in the beginning, but as you work on different projects, your conversations add up, you learn about each other’s ways of working and values (subconciously sometimes), and trade stories. It all adds up - that’s :sparkles: community :sparkles: Be nosy, Rosie Ok this last one is for the brave. As a hermit, I’m braver in writing vs in person so I use that to my advantage. This is an extension of asking all the questions beyond onboarding questions. You ever run into a document or see a presentation shared in a meeting, and you want to know more? You could reach out to the presenters and ask follow up questions, check in with your teammates about how said thing impacts/touches your team, or just learn something new that increases your t-shaped (breadth of) knowledge. Over time, this practice has a two-fold benefit. You get more context beyond your team which makes you more valuable in the long run because you end up living at the intersection of things and understand how everyone is connected. 
For me, whenever I’m in a meeting and someone says “our team is working on changing system X to start doing Y”, I’m able to see how that change affects multiple systems and teams, if there are folks who are not aware of the change who should know about it to plan ahead, and also how it changes planning for your team. This leads us back to our community thing because… You inadvertently build community by becoming someone your teammates and other teams (even leaders!) trust to translate information between squads or assist in unblocking inter-team or inter-org efforts. This is how I’ve been able to keep people in mind when thinking of projects and in turn they do the same. It also helped me get promoted as far as I’m concerned (earlier this year). You see, reader, I switched managers and teams a few months before performance review season. And the people in the room deciding on promotions were never my managers. They were all folks from other teams that I’d worked on projects with and because of the curiosity of understanding our intersections and being able to contribute to connected work, they knew enough about me to put their names on paper and say get that girl a bonus, promo, and title upgrade. I appreciate them dearly :heart: So what did we learn? All these things boil down to Finding your tribe from common contexts Leading with gratitude and having a teamwork mindset Staying curious a.k.a always be learning Play the long game and don’t be transactional in your interactions. Works every time. So as we now watch the 1500M men’s qualifiers of track and field at 3:13am, I hope you keep driving the car on your career and finding your tribe wherever it is you land. And congratulations to all your favorite Olympians!!
  7. Even on Linux, you can enjoy gaming and interact with fellow gamers via Steam. As a Linux gamer, Steam is a handy game distribution platform that allows you to install different games, including purchased ones. Moreover, with Steam, you can connect with other gamers and play multiplayer titles. Steam is a cross-platform game distribution platform that offers gamers the option of purchasing and installing games on any device through a Steam account. This post gives different options for installing Steam on Ubuntu 24.04.
Different Methods of Installing Steam on Ubuntu 24.04
No matter the Ubuntu version that you use, there are three easy ways of installing Steam. For our guide, we are working on Ubuntu 24.04, and we've detailed the steps to follow for each method. Take a look!
Method 1: Install Steam via the Ubuntu Repository
On your Ubuntu system, Steam can be installed from the multiverse repository by following the steps below.
Step 1: Add the Multiverse Repository
The multiverse repository isn't enabled on Ubuntu by default, but executing the following command will add it.
$ sudo add-apt-repository multiverse
Step 2: Refresh the Package Index
After adding the new repository, we must refresh the package index before we can install Steam.
$ sudo apt update
Step 3: Install Steam
Lastly, install Steam from the repository by running the APT command below.
$ sudo apt install steam
Method 2: Install Steam as a Snap
Steam is available as a snap package, and you can install it by accessing the Ubuntu 24.04 App Center or via the command line. To install it via the GUI, use the steps below.
Step 1: Search for Steam in the App Center
On your Ubuntu system, open the App Center and search for "Steam" in the search box. Different results will appear, and the first one is what we want to install.
Step 2: Install Steam
On the search results page, click on Steam to open a window showing a summary of its information. Locate the green Install button and click on it. You will be prompted to enter your password before the installation can begin. Once you do so, a window showing the progress bar of the installation process will appear. Once the process completes, you will have Steam installed and ready for use on your Ubuntu 24.04.
Alternatively, if you prefer the command line over the App Center, you can install the same snap using the snap command. Specify the package when running your command as shown below.
$ sudo snap install steam
The output will show the download and installation progress, and once it completes, Steam will be available from your applications. You can open it and set it up for your gaming.
Method 3: Download and Install the Steam Package
Steam releases a .deb package for Linux, and by downloading it, you can use it to install Steam. Unlike the previous methods, this method requires downloading the Steam package from its website using a command-line utility such as wget or curl.
Step 1: Install wget
To download the Steam .deb package, we will use wget. You can skip this step if you already have it installed. Otherwise, execute the command below.
$ sudo apt install wget
Step 2: Download the Steam Package
With wget installed, run the following command to download the Steam .deb package.
$ wget https://steamcdn-a.akamaihd.net/client/installer/steam.deb
Step 3: Install Steam
To install the .deb package, we will use the dpkg command below.
$ sudo dpkg -i steam.deb
Once Steam completes installing, verify that you can access it by searching for it on your Ubuntu 24.04.
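One caveat worth adding to Method 3 (a hedged note, not part of the original steps): installing a .deb directly with dpkg does not resolve dependencies, so the command above may finish with unmet-dependency errors. If that happens, apt can pull in the missing packages and complete the configuration:
# Fix any missing dependencies left behind by dpkg -i
$ sudo apt --fix-broken install
After this completes, Steam should be available from your applications as described above.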
With that, you now have Steam installed on Ubuntu.
Conclusion
Steam is a handy tool for any gamer, and its cross-platform nature means you can install it on Ubuntu 24.04. We've given three installation methods you can use depending on your preference. Once you've installed Steam, configure it and create your account to start utilizing it. Happy gaming!
  8. Proxmox VE 8 is one of the best open-source and free Type-I hypervisors out there for running QEMU/KVM virtual machines (VMs) and LXC containers. It has a nice web management interface and a lot of features. One of the most amazing features of Proxmox VE is that it can passthrough PCI/PCIE devices (i.e. an NVIDIA GPU) from your computer to Proxmox VE virtual machines (VMs). The PCI/PCIE passthrough is getting better and better with newer Proxmox VE releases. At the time of this writing, the latest version of Proxmox VE is Proxmox VE v8.1 and it has great PCI/PCIE passthrough support. In this article, I am going to show you how to configure your Proxmox VE 8 host/server for PCI/PCIE passthrough and configure your NVIDIA GPU for PCIE passthrough on Proxmox VE 8 virtual machines (VMs).   Table of Contents Enabling Virtualization from the BIOS/UEFI Firmware of Your Motherboard Installing Proxmox VE 8 Enabling Proxmox VE 8 Community Repositories Installing Updates on Proxmox VE 8 Enabling IOMMU from the BIOS/UEFI Firmware of Your Motherboard Enabling IOMMU on Proxmox VE 8 Verifying if IOMMU is Enabled on Proxmox VE 8 Loading VFIO Kernel Modules on Proxmox VE 8 Listing IOMMU Groups on Proxmox VE 8 Checking if Your NVIDIA GPU Can Be Passthrough to a Proxmox VE 8 Virtual Machine (VM) Checking for the Kernel Modules to Blacklist for PCI/PCIE Passthrough on Proxmox VE 8 Blacklisting Required Kernel Modules for PCI/PCIE Passthrough on Proxmox VE 8 Configuring Your NVIDIA GPU to Use the VFIO Kernel Module on Proxmox VE 8 Passthrough the NVIDIA GPU to a Proxmox VE 8 Virtual Machine (VM) Still Having Problems with PCI/PCIE Passthrough on Proxmox VE 8 Virtual Machines (VMs)? Conclusion References   Enabling Virtualization from the BIOS/UEFI Firmware of Your Motherboard Before you can install Proxmox VE 8 on your computer/server, you must enable the hardware virtualization feature of your processor from the BIOS/UEFI firmware of your motherboard. The process is different for different motherboards. So, if you need any assistance in enabling hardware virtualization on your motherboard, read this article.   Installing Proxmox VE 8 Proxmox VE 8 is free to download, install, and use. Before you get started, make sure to install Proxmox VE 8 on your computer. If you need any assistance on that, read this article.   Enabling Proxmox VE 8 Community Repositories Once you have Proxmox VE 8 installed on your computer/server, make sure to enable the Proxmox VE 8 community package repositories. By default, Proxmox VE 8 enterprise package repositories are enabled and you won’t be able to get/install updates and bug fixes from the enterprise repositories unless you have bought Proxmox VE 8 enterprise licenses. So, if you want to use Proxmox VE 8 for free, make sure to enable the Proxmox VE 8 community package repositories to get the latest updates and bug fixes from Proxmox for free.   Installing Updates on Proxmox VE 8 Once you’ve enabled the Proxmox VE 8 community package repositories, make sure to install all the available updates on your Proxmox VE 8 server.   Enabling IOMMU from the BIOS/UEFI Firmware of Your Motherboard The IOMMU configuration is found in different locations in different motherboards. To enable IOMMU on your motherboard, read this article.   Enabling IOMMU on Proxmox VE 8 Once the IOMMU is enabled on the hardware side, you also need to enable IOMMU from the software side (from Proxmox VE 8). 
To enable IOMMU from Proxmox VE 8, you have to add the following kernel boot parameters:
Processor Vendor: Kernel boot parameters to add
Intel: intel_iommu=on iommu=pt
AMD: iommu=pt
To modify the kernel boot parameters of Proxmox VE 8, open the /etc/default/grub file with the nano text editor as follows:
$ nano /etc/default/grub
At the end of the GRUB_CMDLINE_LINUX_DEFAULT line, add the required kernel boot parameters for enabling IOMMU depending on the processor you're using. As I am using an AMD processor, I have added only the kernel boot parameter iommu=pt at the end of the GRUB_CMDLINE_LINUX_DEFAULT line in the /etc/default/grub file. Once you're done, press <Ctrl> + X followed by Y and <Enter> to save the /etc/default/grub file.
Now, update the GRUB boot configurations with the following command:
$ update-grub2
Once the GRUB boot configurations are updated, click on Reboot to restart your Proxmox VE 8 server for the changes to take effect.
Verifying if IOMMU is Enabled on Proxmox VE 8
To verify whether IOMMU is enabled on Proxmox VE 8, run the following command:
$ dmesg | grep -e DMAR -e IOMMU
If IOMMU is enabled, you will see some output confirming that IOMMU is enabled. If IOMMU is not enabled, you may not see any output.
You also need to have IOMMU Interrupt Remapping enabled for PCI/PCIE passthrough to work. To check if IOMMU Interrupt Remapping is enabled on your Proxmox VE 8 server, run the following command:
$ dmesg | grep 'remapping'
As you can see, IOMMU Interrupt Remapping is enabled on my Proxmox VE 8 server.
NOTE: Most modern AMD and Intel processors will have IOMMU Interrupt Remapping enabled. If for any reason you don't have IOMMU Interrupt Remapping enabled, there's a workaround: you have to enable Unsafe Interrupts for VFIO. Read this article for more information on enabling Unsafe Interrupts on your Proxmox VE 8 server.
Loading VFIO Kernel Modules on Proxmox VE 8
The PCI/PCIE passthrough is done mainly by the VFIO (Virtual Function I/O) kernel modules on Proxmox VE 8. The VFIO kernel modules are not loaded at boot time by default on Proxmox VE 8, but it's easy to load them at boot time.
First, open the /etc/modules-load.d/vfio.conf file with the nano text editor as follows:
$ nano /etc/modules-load.d/vfio.conf
Type in the following lines in the /etc/modules-load.d/vfio.conf file.
vfio
vfio_iommu_type1
vfio_pci
Once you're done, press <Ctrl> + X followed by Y and <Enter> to save the changes.
Now, update the initramfs of your Proxmox VE 8 installation with the following command:
$ update-initramfs -u -k all
Once the initramfs is updated, click on Reboot to restart your Proxmox VE 8 server for the changes to take effect.
Once your Proxmox VE 8 server boots, you should see that all the required VFIO kernel modules are loaded.
$ lsmod | grep vfio
Listing IOMMU Groups on Proxmox VE 8
To passthrough PCI/PCIE devices on Proxmox VE 8 virtual machines (VMs), you will need to check the IOMMU groups of your PCI/PCIE devices quite frequently. To make checking for IOMMU groups easier, I decided to write a shell script (I got it from GitHub, but I can't remember the name of the original poster) in the path /usr/local/bin/print-iommu-groups so that I can just run the print-iommu-groups command and it will print the IOMMU groups on the Proxmox VE 8 shell.
First, create a new file print-iommu-groups in the path /usr/local/bin and open it with the nano text editor as follows: $ nano /usr/local/bin/print-iommu-groups   Type in the following lines in the print-iommu-groups file: #!/bin/bash shopt -s nullglob for g in `find /sys/kernel/iommu_groups/* -maxdepth 0 -type d | sort -V`; do echo "IOMMU Group ${g##*/}:" for d in $g/devices/*; do echo -e "\t$(lspci -nns ${d##*/})" done; done;   Once you’re done, press <Ctrl> + X followed by Y and <Enter> to save the changes to the print-iommu-groups file.   Make the print-iommu-groups script file executable with the following command: $ chmod +x /usr/local/bin/print-iommu-groups   Now, you can run the print-iommu-groups command as follows to print the IOMMU groups of the PCI/PCIE devices installed on your Proxmox VE 8 server: $ print-iommu-groups   As you can see, the IOMMU groups of the PCI/PCIE devices installed on my Proxmox VE 8 server are printed.   Checking if Your NVIDIA GPU Can Be Passthrough to a Proxmox VE 8 Virtual Machine (VM) To passthrough a PCI/PCIE device to a Proxmox VE 8 virtual machine (VM), it must be in its own IOMMU group. If 2 or more PCI/PCIE devices share an IOMMU group, you can’t passthrough any of the PCI/PCIE devices of that IOMMU group to any Proxmox VE 8 virtual machines (VMs). So, if your NVIDIA GPU and its audio device are on its own IOMMU group, you can passthrough the NVIDIA GPU to any Proxmox VE 8 virtual machines (VMs). On my Proxmox VE 8 server, I am using an MSI X570 ACE motherboard paired with a Ryzen 3900X processor and Gigabyte RTX 4070 NVIDIA GPU. According to the IOMMU groups of my system, I can passthrough the NVIDIA RTX 4070 GPU (IOMMU Group 21), RTL8125 2.5Gbe Ethernet Controller (IOMMU Group 20), Intel I211 Gigabit Ethernet Controller (IOMMU Group 19), a USB 3.0 controller (IOMMU Group 24), and the Onboard HD Audio Controller (IOMMU Group 25). $ print-iommu-groups   As the main focus of this article is configuring Proxmox VE 8 for passing through the NVIDIA GPU to Proxmox VE 8 virtual machines, the NVIDIA GPU and its Audio device must be in its own IOMMU group.   Checking for the Kernel Modules to Blacklist for PCI/PCIE Passthrough on Proxmox VE 8 To passthrough a PCI/PCIE device on a Proxmox VE 8 virtual machine (VM), you must make sure that Proxmox VE forces it to use the VFIO kernel module instead of its original kernel module. To find out the kernel module your PCI/PCIE devices are using, you will need to know the vendor ID and device ID of these PCI/PCIE devices. You can find the vendor ID and device ID of the PCI/PCIE devices using the print-iommu-groups command. $ print-iommu-groups   For example, the vendor ID and device ID of my NVIDIA RTX 4070 GPU is 10de:2786 and it’s audio device is 10de:22bc.   To find the kernel module a PCI/PCIE device 10de:2786 (my NVIDIA RTX 4070 GPU) is using, run the lspci command as follows: $ lspci -v -d 10de:2786   As you can see, my NVIDIA RTX 4070 GPU is using the nvidiafb and nouveau kernel modules by default. So, they can’t be passed to a Proxmox VE 8 virtual machine (VM) at this point.   The Audio device of my NVIDIA RTX 4070 GPU is using the snd_hda_intel kernel module. So, it can’t be passed on a Proxmox VE 8 virtual machine at this point either. 
$ lspci -v -d 10de:22bc
So, to passthrough my NVIDIA RTX 4070 GPU and its audio device on a Proxmox VE 8 virtual machine (VM), I must blacklist the nvidiafb, nouveau, and snd_hda_intel kernel modules and configure my NVIDIA RTX 4070 GPU and its audio device to use the vfio-pci kernel module.
Blacklisting Required Kernel Modules for PCI/PCIE Passthrough on Proxmox VE 8
To blacklist kernel modules on Proxmox VE 8, open the /etc/modprobe.d/blacklist.conf file with the nano text editor as follows:
$ nano /etc/modprobe.d/blacklist.conf
To blacklist the nouveau, nvidiafb, and snd_hda_intel kernel modules (to passthrough an NVIDIA GPU), add the following lines to the /etc/modprobe.d/blacklist.conf file:
blacklist nouveau
blacklist nvidiafb
blacklist snd_hda_intel
Once you're done, press <Ctrl> + X followed by Y and <Enter> to save the /etc/modprobe.d/blacklist.conf file.
Configuring Your NVIDIA GPU to Use the VFIO Kernel Module on Proxmox VE 8
To configure a PCI/PCIE device (i.e. your NVIDIA GPU) to use the VFIO kernel module, you need to know its vendor ID and device ID. In this case, the vendor ID and device ID of my NVIDIA RTX 4070 GPU and its audio device are 10de:2786 and 10de:22bc.
To configure your NVIDIA GPU to use the VFIO kernel module, open the /etc/modprobe.d/vfio.conf file with the nano text editor as follows:
$ nano /etc/modprobe.d/vfio.conf
To configure your NVIDIA GPU and its audio device with the <vendor-id>:<device-id> 10de:2786 and 10de:22bc (let's say) respectively to use the VFIO kernel module, add the following line to the /etc/modprobe.d/vfio.conf file.
options vfio-pci ids=10de:2786,10de:22bc
Once you're done, press <Ctrl> + X followed by Y and <Enter> to save the /etc/modprobe.d/vfio.conf file.
Now, update the initramfs of Proxmox VE 8 with the following command:
$ update-initramfs -u -k all
Once the initramfs is updated, click on Reboot to restart your Proxmox VE 8 server for the changes to take effect.
Once your Proxmox VE 8 server boots, you should see that your NVIDIA GPU and its audio device (10de:2786 and 10de:22bc in my case) are using the vfio-pci kernel module. Now, your NVIDIA GPU is ready to be passed to a Proxmox VE 8 virtual machine.
$ lspci -v -d 10de:2786
$ lspci -v -d 10de:22bc
Passthrough the NVIDIA GPU to a Proxmox VE 8 Virtual Machine (VM)
Now that your NVIDIA GPU is ready for passthrough on Proxmox VE 8 virtual machines (VMs), you can passthrough your NVIDIA GPU on your desired Proxmox VE 8 virtual machine and install the NVIDIA GPU drivers depending on the operating system that you're using on that virtual machine as usual.
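As a brief illustration of what this step can look like from the Proxmox VE shell (a hedged sketch rather than part of the linked guides: the VM ID 100 and the PCI address 01:00 are placeholders, and options such as pcie=1 assume a q35-based VM), the hostpci option of qm attaches a PCI/PCIE device to a virtual machine:
# Attach all functions of the device at PCI address 01:00 (GPU + audio) to VM 100,
# exposing it as a PCIE device and as the VM's primary GPU; adjust the VM ID and
# address to match the print-iommu-groups output on your own server.
$ qm set 100 -hostpci0 01:00,pcie=1,x-vga=1
This writes a hostpci0 entry into /etc/pve/qemu-server/100.conf; the same device can also be added from the Proxmox VE web UI via the VM's Hardware tab.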
For detailed information on how to passthrough your NVIDIA GPU on a Proxmox VE 8 virtual machine (VM) with different operating systems installed, read one of the following articles: How to Passthrough an NVIDIA GPU to a Windows 11 Proxmox VE 8 Virtual Machine (VM) How to Passthrough an NVIDIA GPU to a Ubuntu 24.04 LTS Proxmox VE 8 Virtual Machine (VM) How to Passthrough an NVIDIA GPU to a LinuxMint 21 Proxmox VE 8 Virtual Machine (VM) How to Passthrough an NVIDIA GPU to a Debian 12 Proxmox VE 8 Virtual Machine (VM) How to Passthrough an NVIDIA GPU to an Elementary OS 8 Proxmox VE 8 Virtual Machine (VM) How to Passthrough an NVIDIA GPU to a Fedora 39+ Proxmox VE 8 Virtual Machine (VM) How to Passthrough an NVIDIA GPU on an Arch Linux Proxmox VE 8 Virtual Machine (VM) How to Passthrough an NVIDIA GPU on a Red Hat Enterprise Linux 9 (RHEL 9) Proxmox VE 8 Virtual Machine (VM)   Still Having Problems with PCI/PCIE Passthrough on Proxmox VE 8 Virtual Machines (VMs)? Even after trying everything listed in this article correctly, if PCI/PCIE passthrough still does not work for you, be sure to try out some of the Proxmox VE PCI/PCIE passthrough tricks and/or workarounds that you can use to get PCI/PCIE passthrough work on your hardware.   Conclusion In this article, I have shown you how to configure your Proxmox VE 8 server for PCI/PCIE passthrough so that you can passthrough PCI/PCIE devices (i.e. your NVIDIA GPU) to your Proxmox VE 8 virtual machines (VMs). I have also shown you how to find out the kernel modules that you need to blacklist and how to blacklist them for a successful passthrough of your desired PCI/PCIE devices (i.e. your NVIDIA GPU) to a Proxmox VE 8 virtual machine. Finally, I have shown you how to configure your desired PCI/PCIE devices (i.e. your NVIDIA GPU) to use the VFIO kernel modules, which is also an essential step for a successful passthrough of your desired PCI/PCIE devices (i.e. your NVIDIA GPU) to a Proxmox VE 8 virtual machine (VM).   References PCI(e) Passthrough – Proxmox VE PCI Passthrough – Proxmox VE The ultimate gaming virtual machine on proxmox – YouTube
  9. Anyone can easily run multiple operating systems on one host simultaneously, provided they have VirtualBox installed. Even on Ubuntu 24.04, you can install VirtualBox and utilize it to run any supported operating system. The best part about VirtualBox is that it is open-source virtualization software, and you can install and use it anytime. Whether you are stuck on how to install VirtualBox on Ubuntu 24.04 or looking to advance with other operating systems on top of your host, this post gives you two easy methods.
Two Methods of Installing VirtualBox on Ubuntu 24.04
There are different ways of installing VirtualBox on Ubuntu 24.04. For instance, you can retrieve a stable VirtualBox version from Ubuntu's repository or add Oracle's VirtualBox repository to install a specific version. Which method to use will depend on your requirements, and we've discussed the methods in the sections below.
Method 1: Install VirtualBox via APT
The easiest way of installing VirtualBox on Ubuntu 24.04 is by sourcing it from the official Ubuntu repository using APT. Below are the steps you should follow.
Step 1: Update the Repository
In every installation, the first step involves refreshing the source list to update the package index by executing the following command.
$ sudo apt update
Step 2: Install VirtualBox
Once you've updated your package index, the next task is to run the install command below to fetch and install the VirtualBox package.
$ sudo apt install virtualbox
Step 3: Verify the Installation
After the installation, use the following command to check the installed version. The output also confirms that you successfully installed VirtualBox on Ubuntu 24.04.
$ VBoxManage --version
Method 2: Install VirtualBox from Oracle's Repository
The previous method shows that we installed VirtualBox version 7.0.14. However, if you visit the VirtualBox website, depending on when you read this post, it's likely that the version we've installed may not be the latest. Although the older VirtualBox versions are okay, installing the latest version is always the better option as it contains all patches and fixes. However, to install the latest version, you must add Oracle's repository to your Ubuntu system before you can execute the install command.
Step 1: Install Prerequisites
All the dependencies you require before you can add the Oracle VirtualBox repository can be installed when you install the software-properties-common package.
$ sudo apt install software-properties-common
Step 2: Add GPG Keys
GPG keys help verify the authenticity of repositories before we add them to the system. The Oracle repository is a third-party repository, and by installing its GPG key, packages from it can be checked for integrity and authenticity. Here's how you add the GPG key.
$ wget -q https://www.virtualbox.org/download/oracle_vbox_2016.asc -O- | sudo apt-key add -
You will receive an output on your terminal showing that the key has been downloaded and installed.
Step 3: Add Oracle's VirtualBox Repository
Oracle has a VirtualBox repository for all supported operating systems. To fetch this repository and add it to your /etc/apt/sources.list.d/, execute the following command.
$ echo "deb [arch=amd64] https://download.virtualbox.org/virtualbox/debian $(lsb_release -cs) contrib" | sudo tee /etc/apt/sources.list.d/virtualbox.list
The output shows that a new repository entry has been created from which we will source VirtualBox when we execute the install command.
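Note that apt-key is deprecated on recent Ubuntu releases, so the command in Step 2 may print a warning or be unavailable. A commonly recommended alternative (sketched here; the keyring path and file name are illustrative choices, not requirements) is to store the dearmored key yourself and reference it from the repository entry with signed-by, which replaces Steps 2 and 3 above:
# Download Oracle's signing key and store it as a binary keyring
$ wget -qO- https://www.virtualbox.org/download/oracle_vbox_2016.asc | sudo gpg --dearmor -o /usr/share/keyrings/oracle-virtualbox-2016.gpg
# Reference that keyring explicitly in the repository entry
$ echo "deb [arch=amd64 signed-by=/usr/share/keyrings/oracle-virtualbox-2016.gpg] https://download.virtualbox.org/virtualbox/debian $(lsb_release -cs) contrib" | sudo tee /etc/apt/sources.list.d/virtualbox.list
Either approach should leave you with a working Oracle repository; use one or the other, not both.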
Step 4: Install VirtualBox With the repository added, let’s first refresh the package index by updating it. $ sudo apt update Next, specify which VirtualBox you want to install using the below syntax. $ sudo apt install virtualbox-[version] For instance, if the latest version when reading this post is version 7.1, you would replace version in the above command with 7.1. However, ensure that the specified version is available on the VirtualBox website. Otherwise, you will get an error as you can’t install something that can’t be found. Conclusion VirtualBox is an effective way of running numerous Operating Systems on one host simultaneously. This post shares two methods of installing VirtualBox on Ubuntu 24.04. First, you can install it via APT by sourcing it from the Ubuntu repository. Alternatively, you can add the Oracle repository and specify a specific version number for the VirtualBox you want to install.
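As an optional follow-up to Step 4 (not part of the original steps): you can list the VirtualBox package names the configured repositories actually provide before picking a version, which avoids the "package not found" error mentioned above.
# List the virtualbox-* packages available from the configured repositories
$ apt-cache search --names-only '^virtualbox-'
# Then install the one you want, for example:
$ sudo apt install virtualbox-7.1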
  10. In recent years, support for PCI/PCIE (i.e. GPU passthrough) has improved a lot in newer hardware. So, the regular Proxmox VE PCI/PCIE and GPU passthrough guide should work in most new hardware. Still, you may face many problems passing through GPUs and other PCI/PCIE devices on a Proxmox VE virtual machine. There are many tweaks/fixes/workarounds for some of the common Proxmox VE GPU and PCI/PCIE passthrough problems. In this article, I am going to discuss some of the most common Proxmox VE PCI/PCIE passthrough and GPU passthrough problems and the steps you can take to solve those problems.   Table of Contents What to do if IOMMU Interrupt Remapping is not Supported? What to do if My GPU (or PCI/PCIE Device) is not in its own IOMMU Group? How do I Blacklist AMD GPU Drivers on Proxmox VE? How do I Blacklist NVIDIA GPU Drivers on Proxmox VE? How do I Blacklist Intel GPU Drivers on Proxmox VE? How to Check if my GPU (or PCI/PCIE Device) is Using the VFIO Driver on Proxmox VE? I Have Blacklisted the AMU GPU Drivers, Still, the GPU is not Using the VFIO Driver, What to Do? I Have Blacklisted the NVIDIA GPU Drivers, Still, the GPU is not Using the VFIO Driver, What to Do? I Have Blacklisted the Intel GPU Drivers, Still, the GPU is not Using the VFIO Driver, What to Do? Single GPU Used VFIO Driver, But When Configured a Second GPU, it Didn’t Work, Why? Why Disable VGA Arbitration for the GPUs and How to Do It? What if my GPU is Still not Using the VFIO Driver Even After Configuring VFIO? GPU Passthrough Showed No Errors, But I’m Getting a Black Screen on the Monitor Connected to the GPU Passed to the Proxmox VE VM, Why? What is AMD Vendor Reset Bug and How to Solve it? How to Provide a vBIOS for the Passed GPU on a Proxmox VE Virtual Machine? What to do If Some Apps Crash the Proxmox VE Windows Virtual Machine? How to Solve HDMI Audio Crackling/Broken Problems on Proxmox VE Linux Virtual Machines?. How to Update Proxmox VE initramfs? How to Update Proxmox VE GRUB Bootloader? Conclusion References   What to do If IOMMU Interrupt Remapping is not Supported? For PCI/PCIE passthrough, IOMMU interrupt remapping is essential. To check whether your processor supports IOMMU interrupt remapping, run the command below: $ dmesg | grep -i remap   If your processor supports IOMMU interrupt remapping, you will see some sort of output confirming that interrupt remapping is enabled. Otherwise, you will see no outputs. If IOMMU interrupt remapping is not supported on your processor, you will have to configure unsafe interrupts on your Proxmox VE server to passthrough PCI/PCIE devices on Proxmox VE virtual machines. To configure unsafe interrupts on Proxmox VE, create a new file iommu_unsafe_interrupts.conf in the /etc/modprobe.d directory and open it with the nano text editor as follows: $ nano /etc/modprobe.d/iommu_unsafe_interrupts.conf   Add the following line in the iommu_unsafe_interrupts.conf file and press <Ctrl> + X followed by Y and <Enter> to save the file. options vfio_iommu_type1 allow_unsafe_interrupts=1   Once you’re done, you must update the initramfs of your Proxmox VE server.   What to do if my GPU (or PCI/PCIE Device) is not in its own IOMMU Group? If your server has multiple PCI/PCIE slots, you can move the GPU to a different PCI/PCIE slot and see if the GPU is in its own IOMMU group. If that does not work, you can try enabling the ACS override kernel patch on Proxmox VE. 
To try enabling the ACS override kernel patch on Proxmox VE, open the /etc/default/grub file with the nano text editor as follows: $ nano /etc/default/grub   Add the kernel boot option pcie_acs_override=downstream at the end of the GRUB_CMDLINE_LINUX_DEFAULT. Once you’re done, press <Ctrl> + X followed by Y and <Enter> to save the file and make sure to update the Proxmox VE GRUB bootloader for the changes to take effect. You should have better IOMMU grouping once your Proxmox VE server boots. If your GPU still does not have its own IOMMU group, you can go one step further by using the pcie_acs_override=downstream,multifunction instead. You should have an even better IOMMU grouping.   If pcie_acs_override=downstream,multifunction results in better IOMMU grouping that pcie_acs_override=downstream, then why use pcie_acs_override=downstream at all? Well, the purpose of PCIE ACS override is to fool the kernel into thinking that the PCIE devices are isolated when they are not in reality. So, PCIE ACS override comes with security and stability issues. That’s why you should try using a less aggressive PCIE ACS override option pcie_acs_override=downstream first and see if your problem is solved. If pcie_acs_override=downstream does not work, only then you should use the more aggressive option pcie_acs_override=downstream,multifunction.   How do I Blacklist AMD GPU Drivers on Proxmox VE? If you want to passthrough an AMD GPU on Proxmox VE virtual machines, you must blacklist the AMD GPU drivers and make sure that it uses the VFIO driver instead. First, open the /etc/modprobe.d/blacklist.conf file with the nano text editor as follows: $ nano /etc/modprobe.d/blacklist.conf   To blacklist the AMD GPU drivers, add the following lines to the /etc/modprobe.d/blacklist.conf file and press <Ctrl> + X followed by Y and <Enter> to save the file. blacklist radeon blacklist amdgpu   Once you’re done, you must update the initramfs of your Proxmox VE server for the changes to take effect.   How do I Blacklist NVIDIA GPU Drivers on Proxmox VE? If you want to passthrough an NVIDIA GPU on Proxmox VE virtual machines, you must blacklist the NVIDIA GPU drivers and make sure that it uses the VFIO driver instead. First, open the /etc/modprobe.d/blacklist.conf file with the nano text editor as follows: $ nano /etc/modprobe.d/blacklist.conf   To blacklist the NVIDIA GPU drivers, add the following lines to the /etc/modprobe.d/blacklist.conf file and press <Ctrl> + X followed by Y and <Enter> to save the file. blacklist nouveau blacklist nvidia blacklist nvidiafb blacklist nvidia_drm   Once you’re done, you must update the initramfs of your Proxmox VE server for the changes to take effect.   How do I Blacklist Intel GPU Drivers on Proxmox VE? If you want to passthrough an Intel GPU on Proxmox VE virtual machines, you must blacklist the Intel GPU drivers and make sure that it uses the VFIO driver instead. First, open the /etc/modprobe.d/blacklist.conf file with the nano text editor as follows: $ nano /etc/modprobe.d/blacklist.conf   To blacklist the Intel GPU drivers, add the following lines to the /etc/modprobe.d/blacklist.conf file and press <Ctrl> + X followed by Y and <Enter> to save the file. blacklist snd_hda_intel blacklist snd_hda_codec_hdmi blacklist i915   Once you’re done, you must update the initramfs of your Proxmox VE server for the changes to take effect.   How to Check if my GPU (or PCI/PCIE Device) is Using the VFIO Driver on Proxmox VE? 
To check if your GPU or desired PCI/PCIE devices are using the VFIO driver, run the following command: $ lspci -v   If your GPU or PCI/PCIE device is using the VFIO driver, you should see the line Kernel driver in use: vfio-pci as marked in the screenshot below.   I Have Blacklisted the AMU GPU Drivers, Still, the GPU is not Using the VFIO Driver, What to Do? At times, blacklisting the AMD GPU drivers is not enough, you also have to configure the AMD GPU drivers to load after the VFIO driver. To do that, open the /etc/modprobe.d/vfio.conf file with the nano text editor as follows: $ nano /etc/modprobe.d/vfio.conf   To configure the AMD GPU drivers to load after the VFIO driver, add the following lines to the /etc/modprobe.d/vfio.conf file and press <Ctrl> + X followed by Y and <Enter> to save the file. softdep radeon pre: vfio-pci softdep amdgpu pre: vfio-pci   Once you’re done, you must update the initramfs of your Proxmox VE server for the changes to take effect.   I Have Blacklisted the NVIDIA GPU Drivers, Still, the GPU is not Using the VFIO Driver, What to Do? At times, blacklisting the NVIDIA GPU drivers is not enough, you also have to configure the NVIDIA GPU drivers to load after the VFIO driver. To do that, open the /etc/modprobe.d/vfio.conf file with the nano text editor as follows: $ nano /etc/modprobe.d/vfio.conf   To configure the NVIDIA GPU drivers to load after the VFIO driver, add the following lines to the /etc/modprobe.d/vfio.conf file and press <Ctrl> + X followed by Y and <Enter> to save the file. softdep nouveau pre: vfio-pci softdep nvidia pre: vfio-pci softdep nvidiafb pre: vfio-pci softdep nvidia_drm pre: vfio-pci softdep drm pre: vfio-pci   Once you’re done, you must update the initramfs of your Proxmox VE server for the changes to take effect.   I Have Blacklisted the Intel GPU Drivers, Still, the GPU is not Using the VFIO Driver, What to Do? At times, blacklisting the Intel GPU drivers is not enough, you also have to configure the Intel GPU drivers to load after the VFIO driver. To do that, open the /etc/modprobe.d/vfio.conf file with the nano text editor as follows: $ nano /etc/modprobe.d/vfio.conf   To configure the Intel GPU drivers to load after the VFIO driver, add the following lines to the /etc/modprobe.d/vfio.conf file and press <Ctrl> + X followed by Y and <Enter> to save the file. softdep snd_hda_intel pre: vfio-pci softdep snd_hda_codec_hdmi pre: vfio-pci softdep i915 pre: vfio-pci   Once you’re done, you must update the initramfs of your Proxmox VE server for the changes to take effect.   Single GPU Used VFIO Driver, But When Configured a Second GPU, it Didn’t Work, Why? In the /etc/modprobe.d/vfio.conf file, you must add the IDs of all the PCI/PCIE devices that you want to use the VFIO driver in a single line. One device per line won’t work. For example, if you have 2 GPUs that you want to configure to use the VFIO driver, you must add their IDs in a single line in the /etc/modprobe.d/vfio.conf file as follows: options vfio-pci ids=<GPU-1>,<GPU-1-Audio>,<GPU-2>,<GPU-2-Audio>   If you want to add another GPU to the list, just append it at the end of the existing vfio-pci line in the /etc/modprobe.d/vfio.conf file as follows: options vfio-pci ids=<GPU-1>,<GPU-1-Audio>,<GPU-2>,<GPU-2-Audio>,<GPU-3>,<GPU-3-Audio>   Never do this. Although it looks much cleaner, it won’t work. I do wish we could specify PCI/PCIE IDs this way. 
options vfio-pci ids=<GPU-1>,<GPU-1-Audio> options vfio-pci ids=<GPU-2>,<GPU-2-Audio> options vfio-pci ids=<GPU-3>,<GPU-3-Audio>   Why Disable VGA Arbitration for the GPUs and How to Do It? If you’re using UEFI/OVMF BIOS on the Proxmox VE virtual machine where you want to passthrough the GPU, you can disable VGA arbitration which will reduce the legacy codes required during boot. To disable VGA arbitration for the GPUs, add disable_vga=1 at the end of the vfio-pci option in the /etc/modprobe.d/vfio.conf file as shown below: options vfio-pci ids=<GPU-1>,<GPU-1-Audio>,<GPU-2>,<GPU-2-Audio> disable_vga=1   What if my GPU is Still not Using the VFIO Driver Even After Configuring VFIO? Even after doing everything correctly, if your GPU still does not use the VFIO driver, you will need to try booting Proxmox VE with kernel options that disable the video framebuffer. On Proxmox VE 7.1 and older, the nofb nomodeset video=vesafb:off video=efifb:off video=simplefb:off kernel options disable the GPU framebuffer for your Proxmox VE server. On Proxmox VE 7.2 and newer, the initcall_blacklist=sysfb_init kernel option does a better job at disabling the GPU framebuffer for your Proxmox VE server. Open the GRUB bootloader configuration file /etc/default/grub file with the nano text editor with the following command: $ nano /etc/default/grub   Add the kernel option initcall_blacklist=sysfb_init at the end of the GRUB_CMDLINE_LINUX_DEFAULT. Once you’re done, press <Ctrl> + X followed by Y and <Enter> to save the file and make sure to update the Proxmox VE GRUB bootloader for the changes to take effect.   GPU Passthrough Showed No Errors, But I’m Getting a Black Screen on the Monitor Connected to the GPU Passed to the Proxmox VE VM, Why? Once you’ve passed a GPU to a Proxmox VE virtual machine, make sure to use the Default Graphics card before you start the virtual machine. This way, you will be able to access the display of the virtual machine from the Proxmox VE web management UI, download the GPU driver installer on the virtual machine, and install it on the virtual machine. Once the GPU driver is installed on the virtual machine, the screen of the virtual machine will be displayed on the monitor connected to the GPU that you’ve passed to the virtual machine as well.   Once the GPU driver is installed on the virtual machine and the screen of the virtual machine is displayed on the monitor connected to the GPU (passed to the virtual machine), power off the virtual machine and set the Display Graphic card of the virtual machine to none. Once you’re set, the next time you power on the virtual machine, the screen of the virtual machine will be displayed on the monitor connected to the GPU (passed to the virtual machine) only, nothing will be displayed on the Proxmox VE web management UI. This way, you will have the same experience as using a real computer even though you’re using a virtual machine.   Remember, never use SPICE, VirtIO GPU, and VirGL GPU Display Graphic card on the Proxmox VE virtual machine that you’re configuring for GPU passthrough as it has a high chance of failure.   What is AMD Vendor Reset Bug and How to Solve it? AMD GPUs have a well-known bug called “vendor reset bug”. Once an AMD GPU is passed to a Proxmox VE virtual machine, and you power off this virtual machine, you won’t be able to use the AMD GPU in another Proxmox VE virtual machine. At times, your Proxmox VE server will become unresponsive as a result. This is called the “vendor reset bug” of AMD GPUs. 
The reason this happens is that AMD GPUs can’t reset themselves correctly after being passed to a virtual machine. To fix this problem, you will have to reset your AMD GPU properly. For more information on installing the AMD vendor reset on Proxmox VE, read this article and read this thread on Proxmox VE forum. Also, check the vendor reset GitHub page.   How to Provide a vBIOS for the Passed GPU on a Proxmox VE Virtual Machine? If you’ve installed the GPU on the first slot of your motherboard, you might not be able to passthrough the GPU in a Proxmox VE virtual machine by default. Some motherboards shadow the vBIOS of the GPU installed on the first slot by default which is the reason the GPU installed on the first slot of those motherboards can’t be passed to virtual machines. The solution to this problem is to install the GPU on the second slot of the motherboard, extract the vBIOS of the GPU, install the GPU on the first slot of the motherboard, and passthrough the GPU to a Proxmox VE virtual machine along with the extracted vBIOS of the GPU. NOTE: To learn how to extract the vBIOS of your GPU, read this article. Once you’ve obtained the vBIOS for your GPU, you must store the vBIOS file in the /usr/share/kvm/ directory of your Proxmox VE server to access it. Once the vBIOS file for your GPU is stored in the /usr/share/kvm/ directory, you need to configure your virtual machine to use it. Currently, there is no way to specify the vBIOS file for PCI/PCIE devices of Proxmox VE virtual machines from the Proxmox VE web management UI. So, you will have to do everything from the Proxmox VE shell/command-line. You can find the Proxmox VE virtual machine configuration files in the /etc/pve/qemu-server/ directory of your Proxmox VE server. Each Proxmox VE virtual machine has one configuration file in this directory in the format <VM-ID>.conf. For example, to open the Proxmox VE virtual machine configuration file (for editing) for the virtual machine ID 100, you will need to run the following command: $ nano /etc/pve/qemu-server/100.conf   In the virtual machine configuration file, you will need to append romfile=<vBIOS-filename> in the hostpciX line which is responsible for passing the GPU on the virtual machine. For example, if the vBIOS filename for my GPU is gigabyte-nvidia-1050ti.bin, and I have passed the GPU on the first slot (slot 0) of the virtual machine (hostpci0), then in the 100.conf file, the line should be as follows: hostpci0: <PCI-ID-of-GPU>,x-vga=on,romfile=gigabyte-nvidia-1050ti.bin   Once you’re done, save the virtual machine configuration file by pressing <Ctrl> + X followed by Y and <Enter>, start the virtual machine, and check if the GPU passthrough is working.   What to do if Some Apps Crash the Proxmox VE Windows Virtual Machine? Some apps such as GeForce Experience, Passmark, etc. might crash Proxmox VE Windows virtual machines. You might also experience a sudden blue screen of death (BSOD) on your Proxmox VE Windows virtual machines. The reason it happens is that the Windows virtual machine might try to access the model-specific registers (MSRs) that are not actually available and depending on how your hardware handles MSRs requests, your system might crash. The solution to this problem is ignoring MSRs messages on your Proxmox VE server. 
To configure MSRs on your Proxmox VE server, open the /etc/modprobe.d/kvm.conf file with the nano text editor as follows:
$ nano /etc/modprobe.d/kvm.conf
To ignore MSRs on your Proxmox VE server, add the following line to the /etc/modprobe.d/kvm.conf file.
options kvm ignore_msrs=1
Once MSRs are ignored, you might see a lot of MSRs warning messages in your dmesg system log. To avoid that, you can ignore MSRs as well as disable logging of MSRs warning messages by adding the following line instead:
options kvm ignore_msrs=1 report_ignored_msrs=0
Once you're done, press <Ctrl> + X followed by Y and <Enter> to save the /etc/modprobe.d/kvm.conf file and update the initramfs of your Proxmox VE server for the changes to take effect.
How to Solve HDMI Audio Crackling/Broken Problems on Proxmox VE Linux Virtual Machines?
If you've passed the GPU to a Linux Proxmox VE virtual machine and you're getting bad audio quality on the virtual machine, you will need to enable MSI (Message Signaled Interrupts) for the audio device on the Proxmox VE virtual machine.
To enable MSI on the Linux Proxmox VE virtual machine, open the /etc/modprobe.d/snd-hda-intel.conf file with the nano text editor on the virtual machine with the following command:
$ sudo nano /etc/modprobe.d/snd-hda-intel.conf
Add the following line and save the file by pressing <Ctrl> + X followed by Y and <Enter>.
options snd-hda-intel enable_msi=1
For the changes to take effect, reboot the Linux virtual machine with the following command:
$ sudo reboot
Once the virtual machine boots, check if MSI is enabled for the audio device with the following command:
$ sudo lspci -vv
If MSI is enabled for the audio device on the virtual machine, you should see the marked line in the audio device information.
How to Update Proxmox VE initramfs?
Every time you make any changes to files in the /etc/modules-load.d/ and /etc/modprobe.d/ directories, you must update the initramfs of your Proxmox VE 8 installation with the following command:
$ update-initramfs -u -k all
Once the Proxmox VE initramfs is updated, reboot your Proxmox VE server for the changes to take effect.
$ reboot
How to Update Proxmox VE GRUB Bootloader?
Every time you update the Proxmox VE GRUB boot configuration file /etc/default/grub, you must update the GRUB bootloader for the changes to take effect.
To update the Proxmox VE GRUB bootloader with the new configuration, run the following command:
$ update-grub2
Once the GRUB bootloader is updated with the new configuration, reboot your Proxmox VE server for the changes to take effect.
$ reboot
Conclusion
In this article, I have discussed some of the most common Proxmox VE PCI/PCIE passthrough and GPU passthrough problems and the steps you can take to solve those problems.
References
[TUTORIAL] – PCI/GPU Passthrough on Proxmox VE 8 : Installation and configuration | Proxmox Support Forum
Ultimate Beginner's Guide to Proxmox GPU Passthrough
Reading and Writing Model Specific Registers in Linux
The MSI Driver Guide HOWTO — The Linux Kernel documentation
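A quick addendum to the vfio-pci sections above: the numeric vendor:device IDs that go into the options vfio-pci ids=... line can be read from lspci. The grep pattern and the IDs mentioned here are only illustrative examples, not values taken from this article:
$ lspci -nn | grep -i -e vga -e audio
Each matching line ends with a vendor:device pair in square brackets (for example, something like [10de:1c82] for a GPU and [10de:0fb9] for its HDMI audio function), and those are the values to list, comma-separated, in /etc/modprobe.d/vfio.conf.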
  11. Proxmox VE (Virtualization Environment) is an open-source enterprise virtualization and containerization platform. It has a built-in user-friendly web interface for managing virtual machines and LXC containers. It has other features such as Ceph software-defined storage (SDS), software-defined networking (SDN), high availability (HA) clustering, and many more. After the recent Broadcom acquisition of VMware, the cost of VMware products has risen to the point that many small to medium-sized companies are/will be forced to switch to alternate products. Even the free VMware ESXi is discontinued which is bad news for homelab users. Proxmox VE is one of the best alternatives to VMware vSphere and it has the same set of features as VMware vSphere (with a few exceptions of course). Proxmox VE is open-source and free, which is great for home labs as well as businesses. Proxmox VE also has an optional enterprise subscription option that you can purchase if needed. In this article, I will show you how to install Proxmox VE 8 on your server. I will cover Graphical UI-based installation methods of Proxmox VE and Terminal UI-based installation for systems having problems with the Graphical UI-based installer.   Table of Contents Booting Proxmox VE 8 from a USB Thumb Drive Installing Proxmox VE 8 using Graphical UI Installing Proxmox VE 8 using Terminal UI Accessing Proxmox VE 8 Management UI from a Web Browser Enabling Proxmox VE Community Package Repositories Keeping Proxmox VE Up-to-date Conclusion References   Booting Proxmox VE 8 from a USB Thumb Drive First, you need to download the Proxmox VE 8 ISO image and create a bootable USB thumb drive of Proxmox VE 8. If you need any assistance on that, read this article. Once you’ve created a bootable USB thumb drive of Proxmox VE 8, power off your server, insert the bootable USB thumb drive on your server, and boot the Proxmox VE 8 installer from it. Depending on the motherboard manufacturer, you need to press a certain key after pressing the power button to boot from the USB thumb drive. If you need any assistance on booting your server from a USB thumb drive, read this article. Once you’ve successfully booted from the USB thumb drive, the Proxmox VE GRUB menu should be displayed.   Installing Proxmox VE 8 using Graphical UI To install Proxmox VE 8 using a graphical user interface, select Install Proxmox VE (Graphical) from the Proxmox VE GRUB menu and press <Enter>.   The Proxmox VE installer should be displayed. Click on I agree.   Now, you have to configure the disk for the Proxmox VE installation. You can configure the disk for Proxmox VE installation in different ways: If you have a single 500GB/1TB (or larger capacity) SSD/HDD on your server, you can use it for Proxmox VE installation as well as storing virtual machine images, container images, snapshots, backups, ISO images, and so on. That’s not very safe, but you can try out Proxmox this way without needing a lot of hardware resources. You can use a small 64GB or 128GB SSD for Proxmox VE installation only. Once Proxmox VE is installed, you can create additional storage pools for storing virtual machine images, container images, snapshots, backups, ISO images, and so on. You can create a big ZFS or BTRFS RAID for Proxmox VE installation which will also be used for storing virtual machine images, container images, snapshots, backups, ISO images, and so on.   
a) To install Proxmox VE on a single SSD/HDD and also use the SSD/HDD for storing virtual machine and container images, ISO images, virtual machine and container snapshots, virtual machine and container backups, etc., select the SSD/HDD from the Target Harddisk dropdown menu[1] and click on Next[2]. Proxmox VE will use a small portion of the free disk space for the Proxmox VE root filesystem and the rest of the disk space will be used for storing virtual machine and container data.   If you want to change the filesystem of your Proxmox VE installation or configure the size of different Proxmox VE partitions/storages, select the HDD/SSD you want to use for your Proxmox VE installation from the Target Harddisk dropdown menu and click on Options.   An advanced disk configuration window should be displayed. From the Filesystem dropdown menu, select your desired filesystem. ext4 and xfs filesystems are supported for single-disk Proxmox VE installation at the time of this writing[1]. Other storage configuration parameters are: hdsize[2]: By default Proxmox VE will use all the disk space of the selected HDD/SSD. To keep some disk space free on the selected HDD/SSD, type in the amount of disk space (in GB) that you want Proxmox VE to use and the rest of the disk space should be free. swapsize[3]: By default, Proxmox VE will use 4GB to 8GB of disk space for swap depending on the amount of memory/RAM you have installed on the server. To set a custom swap size for Proxmox VE, type in your desired swap size (in GB unit) here. maxroot[4]: Defines the maximum disk space to use for the Proxmox VE LVM root volume/filesystem. minfree[5]: Defines the minimum disk space that must be free in the Proxmox VE LVM volume group (VG). This space will be used for LVM snapshots. maxvz[6]: Defines the maximum disk space to use for the Proxmox VE LVM data volume where virtual machine and container data/images will be stored. Once you’re done with the disk configuration, click on OK[7].   To install Proxmox VE on disk with your desired storage configuration, click on Next.   b) To install Proxmox VE on a small SSD and create the necessary storage for the virtual machine and container data later, select the SSD from the Target Harddisk dropdown menu[1] and click on Options[2].   Set maxvz to 0 to disable virtual machine and container storage on the SSD where Proxmox VE will be installed and click on OK.   Once you’re done, click on Next.   c) To create a ZFS or BTRFS RAID and install Proxmox VE on the RAID, click on Options.   You can pick different ZFS and BTRFS RAID types from the Filesystem dropdown menu. Each of these RAID types works differently and requires a different number of disks. For more information on how different RAID types work, their requirements, features, data safety, etc, read this article.   RAID0, RAID1, and RAID10 are discussed in this article thoroughly. RAIDZ-1 and RAIDZ-2 work in the same way as RAID5 and RAID6 respectively. RAID5 and RAID6 are also discussed in this article. RAIDZ-1 requires at least 2 disks (3 disks recommended), uses a single parity, and can sustain only 1 disk failure. RAIDZ-2 requires at least 3 disks (4 disks recommended), uses double parity, and can sustain 2 disks failure. RAIDZ-3 requires at least 4 disks (5 disks recommended), uses triple parity, and can sustain 3 disks failure.   Although you can create BTRFS RAIDs on Proxmox VE, at the time of this writing, BTRFS on Proxmox VE is still in technology preview. So, I don’t recommend using it in production systems. 
I will demonstrate ZFS RAID configuration on Proxmox VE in this article.
To create a ZFS RAID for Proxmox VE installation, select your desired ZFS RAID type from the Filesystem dropdown menu[1]. From the Disk Setup tab, select the disks that you want to use for the ZFS RAID using the Harddisk X dropdown menus[2]. If you don't want to use a disk for the ZFS RAID, select – do not use – from the respective Harddisk X dropdown menu[3].
From the Advanced Options tab, you can configure different ZFS filesystem parameters.
ashift[1]: You can set the ZFS block size using this option. The block size is calculated using the formula 2^ashift. The default ashift value is 12, which gives 2^12 = 4096 bytes = 4 KB block size. A 4 KB block size is good for SSDs and modern 4K-sector hard drives. If you're using an older mechanical hard drive (HDD) with 512-byte sectors, set ashift to 9 (2^9 = 512 bytes) to match its sector size.
compress[2]: You can enable/disable ZFS compression from this dropdown menu. To enable compression, set compression to on. To disable compression, set compression to off. When compression is on, the default ZFS compression algorithm (lz4 at the time of this writing) is used. You can select other ZFS compression algorithms (i.e. lzjb, zle, gzip, zstd) as well if you have such preferences.
checksum[3]: ZFS checksums are used to detect corrupted files so that they can be repaired. You can enable/disable ZFS checksums from this dropdown menu. To enable ZFS checksums, set checksum to on. To disable ZFS checksums, set checksum to off. When checksum is on, the fletcher4 algorithm is used for non-deduped (deduplication disabled) datasets and the sha256 algorithm is used for deduped (deduplication enabled) datasets by default.
copies[4]: You can set the number of redundant copies of the data you want to keep in your ZFS RAID. This is in addition to the RAID-level redundancy and provides extra data protection. The default number of copies is 1, and you can store at most 3 copies of the data in your ZFS RAID. This feature is also known as ditto blocks.
ARC max size[5]: You can set the maximum amount of memory ZFS is allowed to use for the Adaptive Replacement Cache (ARC) from here.
hdsize[6]: By default, all the free disk space is used for the ZFS RAID. If you want to keep some portion of the disk space of each SSD free and use the rest for the ZFS RAID, type in the disk space you want to use (in GB) here. For example, if you have 40 GB disks and you want to use 35 GB of each disk for the ZFS RAID and keep 5 GB of disk space free on each disk, you will need to type in 35GB here.
Once you're done with the ZFS RAID configuration, click on OK[7].
Once you're done with the ZFS storage configuration, click on Next to continue.
Type in the name of your country[1], select your time zone[2], select your keyboard layout[3], and click on Next[4].
Type in your Proxmox VE root password[1] and your email[2]. Once you're done, click on Next[3].
If you have multiple network interfaces available on your server, select the one you want to use for accessing the Proxmox VE web management UI from the Management Interface dropdown menu[1]. If you have only a single network interface available on your server, it will be selected automatically. Type in the domain name that you want to use for Proxmox VE in the Hostname (FQDN) section[2]. Type in your desired IP information for the Proxmox VE server[3] and click on Next[4].
An overview of your Proxmox VE installation should be displayed. If everything looks good, click on Install to start the Proxmox VE installation.
NOTE: If anything seems wrong or you want to change certain information, you can always click on Previous to go back and fix it. So, make sure to check everything before clicking on Install.   The Proxmox VE installation should start. It will take a while to complete.   Once the Proxmox VE installation is complete, you will see the following window. Your server should restart within a few seconds.   On the next boot, you will see the Proxmox VE GRUB boot menu.   Once Proxmox VE is booted, you will see the Proxmox VE command-line login prompt. You will also see the access URL of the Proxmox VE web-based management UI.   Installing Proxmox VE 8 using Terminal UI In some hardware, the Proxmox VE graphical installer may not work. In that case, you can always use the Proxmox VE terminal installer. You will find the same options in the Proxmox VE terminal installer as in the graphical installer. So, you should not have any problems installing Proxmox VE on your server using the terminal installer. To use the Proxmox VE terminal installer, select Install Proxmox VE (Terminal UI) from the Proxmox VE GRUB boot menu and press <Enter>.   Select <I agree> and press <Enter>.   To install Proxmox VE on a single disk, select an HDD/SSD from the Target harddisk section, select <Next>, and press <Enter>.   For advanced disk configuration or ZFS/BTRFS RAID setup, select <Advanced options> and press <Enter>.   You will find the same disk configuration options as in the Proxmox VE graphical installer. I have already discussed all of them in the Proxmox VE Graphical UI installation section. Make sure to check it out for detailed information on all of those disk configuration options. Once you’ve configured the disk/disks for the Proxmox VE installation, select <Ok> and press <Enter>.   Once you’re done with advanced disk configuration for your Proxmox VE installation, select <Next> and press <Enter>.   Select your country, timezone, and keyboard layout. Once you’re done, select <Next> and press <Enter>.   Type in your Proxmox VE root password and email address. Once you’re done, select <Next> and press <Enter>.   Configure the management network interface for Proxmox VE, select <Next>, and press <Enter>.   An overview of your Proxmox VE installation should be displayed. If everything looks good, select <Install> and press <Enter> to start the Proxmox VE installation. NOTE: If anything seems wrong or you want to change certain information, you can always select <Previous> and press <Enter> to go back and fix it. So, make sure to check everything before installing Proxmox VE.   The Proxmox VE installation should start. It will take a while to complete.   Once the Proxmox VE installation is complete, you will see the following window. Your server should restart within a few seconds.   Once Proxmox VE is booted, you will see the Proxmox VE command-line login prompt. You will also see the access URL of the Proxmox VE web-based management UI.   Accessing Proxmox VE 8 Management UI from a Web Browser To access the Proxmox VE web-based management UI from a web browser, you need a modern web browser (i.e. Google Chrome, Microsoft Edge, Mozilla Firefox, Opera, Apple Safari). Open a web browser of your choice and visit the Proxmox VE access URL (i.e. https://192.168.0.105:8006) from the web browser. By default, Proxmox VE uses a self-signed SSL certificate which your web browser will not trust. So, you will see a similar warning. To accept the Proxmox VE self-signed SSL certificate, click on Advanced.   
Then, click on Accept the Risk and Continue.   You will see the Proxmox VE login prompt. Type in your Proxmox VE login username (root) and password[1] and click on Login[2].   You should be logged in to your Proxmox VE web-management UI. As you’re using the free version of Proxmox VE, you will see a No valid subscription warning message every time you log in to Proxmox VE. To ignore this warning and continue using Proxmox VE for free, just click on OK.   The No valid subscription warning should be gone. Proxmox VE is now ready to use.   Enabling Proxmox VE Community Package Repositories If you want to use Proxmox VE for free, after installing Proxmox VE on your server, one of the first things you want to do is disable the Proxmox VE enterprise package repositories and enable the Proxmox VE community package repositories. This way, you can get access to the Proxmox VE package repositories for free and keep your Proxmox VE server up-to-date. To learn how to enable the Proxmox VE community package repositories, read this article.   Keeping Proxmox VE Up-to-date After installing Proxmox VE on your server, you should check if new updates are available for your Proxmox VE server. If new updates are available, you should install them as it will improve the performance, stability, and security of your Proxmox VE server. For more information on keeping your Proxmox VE server up-to-date, read this article.   Conclusion In this article, I have shown you how to install Proxmox VE on your server using the Graphical installer UI and the Terminal installer UI. The Proxmox VE Terminal installer UI installer is for systems that don’t support the Proxmox VE Graphical installer UI. So, if you’re having difficulty with the Proxmox VE Graphical installer UI, the Terminal installer UI will still work and save your day. I have also discussed and demonstrated different disk/storage configuration methods for Proxmox VE as well as configuring ZFS RAID for Proxmox VE and installing Proxmox VE on the ZFS RAID as well.   References RAIDZ Types Reference ZFS/Virtual disks – ArchWiki ZFS Tuning Recommendations | High Availability The copies Property Checksums and Their Use in ZFS — OpenZFS documentation ZFS ARC Parameters – Oracle Solaris Tunable Parameters Reference Manual
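If you installed Proxmox VE on a ZFS RAID, you can also do a quick health check of the pool from the Proxmox VE shell after the first boot. This is just a sanity check and not part of the installer:
$ zpool status
$ zpool list
The output should show your pool (named rpool by default) as ONLINE with all member disks listed.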
  12. Most of the operating system distributes their installer program in ISO image format. So, the most common way of installing an operating system on a Proxmox VE virtual machine is using an ISO image of that operating system. You can obtain the ISO image file of your favorite operating systems from their official website. To install your favorite operating system on a Proxmox VE virtual machine, the ISO image of that operating system must be available in a proper storage location on your Proxmox VE server. The Proxmox VE storage that supports ISO image files has a section ISO Images and has options for uploading and downloading ISO images.   In this article, I will show you how to upload an ISO image to your Proxmox VE server from your computer. I will show you how to download an ISO image directly on your Proxmox VE server using the download links or URL of that ISO image.   Table of Contents Uploading an ISO Image on Proxmox VE Server from Your Computer Downloading an ISO Image on Proxmox VE Server using URL Conclusion   Uploading an ISO Image on Proxmox VE Server from Your Computer To upload an ISO image on your Proxmox VE server from your computer, navigate to the ISO Images section of an ISO image-supported storage from the Proxmox VE web management UI and click on Upload.   Click on Select File from the Upload window.   Select the ISO image file that you want to upload on your Proxmox VE server from the filesystem of your computer[1] and click on Open[2].   Once the ISO image file is selected, the ISO image file name will be displayed in the File name section. If you want, you can modify the ISO image file name which will be stored on your Proxmox VE server once it’s uploaded[1]. The size of the ISO image file will be displayed in the File size section[2]. Once you’re ready to upload the ISO image on your Proxmox VE server, click on Upload[3].   The ISO image file is being uploaded to the Proxmox VE server. It will take a few seconds to complete. If for some reason you want to stop the upload process, click on Abort.   Once the ISO image file is uploaded to your Proxmox VE server, you will see the following window. Just close it.   Shortly, the ISO image that you’ve uploaded to your Proxmox VE server should be listed in the ISO Images section of the selected Proxmox VE storage.   Downloading an ISO Image on Proxmox VE Server using URL To upload an ISO image on your Proxmox VE server using a URL or download link, visit the official website of the operating system that you want to download and copy the download link or URL of the ISO image from the website. For example, to download the ISO image of Debian 12, visit the official website of Debian from a web browser[1], right-click on Download, and click on Copy Link[2].   Then, navigate to the ISO Images section of an ISO image-supported storage from the Proxmox VE web management UI and click on Download from URL.   Paste the download link or URL of the ISO image in the URL section and click on Query URL.   Proxmox VE should check the ISO file URL and obtain the necessary information like the File name[1] and File size[2] of the ISO image file. If you want to save the ISO image file in a different name on your Proxmox VE server, just type it in the File name section[1]. Once you’re ready, click on Download[3].   Proxmox VE should start downloading the ISO image file from the URL. It will take a while to complete.   Once the ISO image file is downloaded on your Proxmox VE server, you will see the following window. Just close it.   
The downloaded ISO image file should be listed in the ISO Images section of the selected Proxmox VE storage.   Conclusion In this article, I have shown you how to upload an ISO image from your computer on the Proxmox VE server. I have also shown you how to download an ISO image using a URL directly on your Proxmox VE server.
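If you prefer the command line, you can also copy an ISO image directly into the ISO directory of the default local storage over SSH instead of using the web UI. The server address and the ISO filename below are examples; /var/lib/vz/template/iso/ is the ISO path of the default local storage and will differ for other storage types:
$ scp ~/Downloads/debian-12.5.0-amd64-netinst.iso root@192.168.0.105:/var/lib/vz/template/iso/
You can also download an ISO directly on the server with wget from the Proxmox VE shell:
$ wget -P /var/lib/vz/template/iso/ <ISO-download-URL>
The copied or downloaded ISO should then show up in the ISO Images section of the local storage.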
  13. Keeping your Proxmox VE server up-to-date is important as newer updates come with bug fixes and improved security. If you’re using the Proxmox VE community version (the free version of Proxmox VE without an enterprise subscription), installing new updates will also add new features to your Proxmox VE server as they are released. In this article, I am going to show you how to check if new updates are available on your Proxmox VE server. If updates are available, I will also show you how to install the available updates on your Proxmox VE server.   Table of Contents Enabling the Proxmox VE Community Package Repositories Checking for Available Updates on Proxmox VE Installing Available Updates on Proxmox VE Conclusion   Enabling the Proxmox VE Community Package Repositories If you don’t have an enterprise subscription on your Proxmox VE server, you need to disable the Proxmox VE enterprise package repositories and enable the Proxmox VE community package repositories to receive software updates on your Proxmox VE server. If you want to keep using Proxmox VE for free, make sure to enable the Proxmox VE community package repositories.   Checking for Available Updates on Proxmox VE To check if new updates are available on your Proxmox VE server, log in to your Proxmox VE web-management UI, navigate to the Updates section of your Proxmox VE server, and click on Refresh.   If you’re using the Proxmox VE community version (free version), you will see a No valid subscription warning. Click on OK to ignore the warning.   The Proxmox VE package database should be updated. Close the Task viewer window.   If newer updates are not available, then you will see the No updates available message after the Proxmox VE package database is updated.   If newer updates are available for your Proxmox VE server, you will see a list of packages that can be updated as shown in the screenshot below.   Installing Available Updates on Proxmox VE To install all the available updates on your Proxmox VE server, click on Upgrade.   A new NoVNC window should be displayed. Press Y and then press <Enter> to confirm the installation.   The Proxmox VE updates are being downloaded. It will take a while to complete.   The Proxmox VE updates are being installed. It will take a while to complete.   At this point, the Proxmox VE updates should be installed. Close the NoVNC window.   If you check for Proxmox VE updates, you should see the No updates available message. Your Proxmox VE server should be up-to-date[1]. After the updates are installed, it’s best to reboot your Proxmox VE server. To reboot your Proxmox VE server, click on Reboot[2].   Conclusion In this article, I have shown you how to check if new updates are available for your Proxmox VE server. If new updates are available, I have also shown you how to install the available updates on your Proxmox VE server. You should always keep your Proxmox VE server up-to-date so that you get the latest bug fixes and security updates.
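For reference, the repository switch and the updates can also be done from the Proxmox VE shell. The lines below are a sketch for Proxmox VE 8 (which is based on Debian 12 "bookworm"); adjust the codename if you are on a different release:
$ sed -i 's/^deb/# deb/' /etc/apt/sources.list.d/pve-enterprise.list
$ echo "deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription" > /etc/apt/sources.list.d/pve-no-subscription.list
$ apt update
$ apt dist-upgrade
The first command comments out the enterprise repository (which requires a subscription), the second adds the free no-subscription repository, and the last two refresh the package index and install all available updates, which is the same thing the Refresh and Upgrade buttons do in the web UI.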
  14. The full form of SR-IOV is Single Root I/O Virtualization. Some PCI/PCIE devices have multiple virtual functions and each of these virtual functions can be passed to a different virtual machine. SR-IOV is the technology that allows this type of PCI/PCIE passthrough. For example, an 8-port SR-IOV capable network card has 8 virtual functions, 1 for each port. 8 of these virtual functions or network ports can be passed to 8 different virtual machines (VMs). In this article, we will show you how to enable the SR-IOV CPU feature from the BIOS/UEFI firmware of some of the most popular desktop motherboards (i.e. ASUS, ASRock, MSI, and Gigabyte).   Table of Contents How to Enable SR-IOV from the BIOS/UEFI Firmware of ASUS Motherboards How to Enable SR-IOV from the BIOS/UEFI Firmware of ASRock Motherboards How to Enable SR-IOV from the BIOS/UEFI Firmware of MSI Motherboards How to Enable SR-IOV from the BIOS/UEFI Firmware of Gigabyte Motherboards Conclusion References   How to Enable SR-IOV from the BIOS/UEFI Firmware of ASUS Motherboards To enter the BIOS/UEFI Firmware of your ASUS motherboard, press <Delete> right after pressing the power button of your computer. The BIOS/UEFI Firmware of ASUS motherboards has two modes: “EZ Mode” and “Advanced Mode”. Once you’ve entered the BIOS/UEFI Firmware of your ASUS motherboard, you will be in “EZ Mode” by default. To enable IOMMU/VT-d on your ASUS motherboard, you have to enter the “Advanced Mode”. To enter “Advanced Mode”, press <F7> while you’re in “EZ Mode”. For both AMD and Intel systems, navigate to the “Advanced” tab (by pressing the arrow keys), navigate to “PCI Subsystem Settings”, and set “SR-IOV Support” to “Enabled”. To save the changes, press <F10>, select OK, and press <Enter>. The SR-IOV feature should be enabled. For more information on enabling the SR-IOV feature from the BIOS/UEFI Firmware of your ASUS motherboard, check the BIOS Manual of your ASUS motherboard.   How to Enable SR-IOV from the BIOS/UEFI Firmware of ASRock Motherboards To enter the BIOS/UEFI Firmware of your ASRock motherboard, press <F2> or <Delete> right after pressing the power button of your computer. If you’re using a high-end ASRock motherboard, you may find yourself in “Easy Mode” once you enter the BIOS/UEFI Firmware of your ASRock motherboard. In that case, press <F6> to switch to “Advanced Mode”. If you’re using a cheap/mid-range ASRock motherboard, you may not have an “Easy Mode”. You will be taken to “Advanced Mode” directly. In that case, you won’t have to press <F6> to switch to “Advanced Mode”. You will be in the “Main” tab by default. Press the <Right> arrow key to navigate to the “Advanced” tab of the BIOS/UEFI Firmware of your ASRock motherboard. If you have an AMD processor, navigate to “PCI Configuration” and set “SR-IOV Support” to “Enabled”. If you have an Intel processor, navigate to “Chipset Configuration” and set “SR-IOV Support” to “Enabled”. To save the changes, press <F10>, select Yes, and press <Enter>. The SR-IOV feature should be enabled. For more information on enabling the SR-IOV feature from the BIOS/UEFI Firmware of your AsRock motherboard, check the BIOS Manual of your ASUS motherboard.   How to Enable SR-IOV from the BIOS/UEFI Firmware of MSI Motherboards To enter the BIOS/UEFI Firmware of your MSI motherboard, press <Delete> right after pressing the power button of your computer. The BIOS/UEFI Firmware of MSI motherboards has two modes: “EZ Mode” and “Advanced Mode”. 
Once you’ve entered the BIOS/UEFI Firmware of your MSI motherboard, you will be in “EZ Mode” by default. To enable the SR-IOV on your MSI motherboard, you have to enter the “Advanced Mode”. To enter the “Advanced Mode”, press <F7> while you’re in “EZ Mode”. From the “Advanced Mode”, navigate to “Settings”. If you’re using an AMD processor, navigate to “Advanced” > “PCI Subsystem Settings” and set “SR-IOV Support” to “Enabled”. If you’re using an Intel processor, navigate to “Advanced” > “PCIe/PCI Sub-system Settings” and set “SR-IOV Support” to “Enabled”. NOTE: You may not find the “SR-IOV Support” option in the BIOS/UEFI firmware of your MSI motherboard. In that case, you can try updating the BIOS/UEFI firmware version and see if the option is available. To save the changes, press <F10>, select Yes, and press <Enter>. The SR-IOV feature should be enabled. For more information on enabling the SR-IOV feature from the BIOS/UEFI Firmware of your MSI motherboard, check the BIOS Manual of your MSI motherboard.   How to Enable SR-IOV from the BIOS/UEFI Firmware of Gigabyte Motherboards To enter the BIOS/UEFI Firmware of your Gigabyte motherboard, press <Delete> right after pressing the power button of your computer. The BIOS/UEFI Firmware of Gigabyte motherboards has two modes: “Easy Mode” and “Advanced Mode”. To enable SR-IOV, you have to switch to “Advanced Mode”. If you’re in “Easy Mode”, press <F2> to switch to “Advanced Mode”. If you have an AMD processor, navigate to the “Settings” tab, navigate to “IO Ports”, and set “SR-IOV Support” to “Enabled”. If you have an Intel processor, navigate to the “Advanced” tab, navigate to “PCI Subsystem Settings”, and set “SR-IOV Support” to “Enabled”. NOTE: On newer Gigabyte motherboards (i.e. Z590, Z690, Z790), the SR-IOV option might be missing. In that case, try enabling Intel virtualization technology VT-x/VT-d and see if the SR-IOV option is displayed on the BIOS/UEFI firmware of your Gigabyte motherboard. To save the changes, press <F10>, select Yes, and press <Enter>. SR-IOV should be enabled for your processor. For more information on enabling SR-IOV on your Gigabyte motherboard, we recommend you to read the “User Manual” or “BIOS Setup Manual” of your Gigabyte motherboard.   Conclusion We showed you how to enable the SR-IOV CPU feature from the BIOS/UEFI firmware of some of the most popular desktop motherboards (i.e. ASUS, ASRock, MSI, and Gigabyte).   References ASUS ROG Maximus Z690 Hero BIOS Overview ASUS ROG STRIX X570-E Gaming WIFI II BIOS Walk Thru ASRock Bios Optimization! [AMD 7800X3D | X670E Taichi Carrara | XMP PC 5600 CL28 G.Skill | 4090HOF] ASRock Z690 Taichi BIOS Overview SR-IOV on MSI X470 Gaming Pro | MSI Global English Forum Bios Settings 7950x3D 7800x3D [Gigabyte Aorus Elite Ax x670] ASUS PRIME Z490-A BIOS Overview  
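Once SR-IOV is enabled in the firmware and the system has booted into Linux, you can check whether an SR-IOV capable network card actually exposes virtual functions through sysfs. The interface name enp1s0f0 and the VF count used below are placeholders; replace them with your own values:
$ cat /sys/class/net/enp1s0f0/device/sriov_totalvfs
$ echo 4 | sudo tee /sys/class/net/enp1s0f0/device/sriov_numvfs
$ lspci | grep -i "Virtual Function"
The first command shows how many virtual functions the card supports, the second creates 4 of them, and the last one lists the newly created virtual functions as separate PCI devices that can then be passed to virtual machines.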
  15. The full form of IOMMU is Input-Output Memory Management Unit. An IOMMU maps the virtual addresses used by a device to physical memory addresses, which is what allows a PCI/PCIE device to be passed through to a virtual machine (VM). IOMMU is the generic name for this technology and the term AMD uses in its firmware (AMD's implementation is called AMD-Vi), while VT-d is Intel's implementation of the same idea. In this article, we will show you how to enable the IOMMU/VT-d CPU feature from the BIOS/UEFI firmware of some of the most popular desktop motherboards (i.e. ASUS, ASRock, MSI, and Gigabyte).
Table of Contents
How to Enable IOMMU/VT-d from the BIOS/UEFI Firmware of ASUS Motherboards
How to Enable IOMMU/VT-d from the BIOS/UEFI Firmware of ASRock Motherboards
How to Enable IOMMU/VT-d from the BIOS/UEFI Firmware of MSI Motherboards
How to Enable IOMMU/VT-d from the BIOS/UEFI Firmware of Gigabyte Motherboards
Conclusion
References
How to Enable IOMMU/VT-d from the BIOS/UEFI Firmware of ASUS Motherboards
To enter the BIOS/UEFI Firmware of your ASUS motherboard, press <Delete> right after pressing the power button of your computer.
The BIOS/UEFI Firmware of ASUS motherboards has two modes: "EZ Mode" and "Advanced Mode". Once you've entered the BIOS/UEFI Firmware of your ASUS motherboard, you will be in "EZ Mode" by default. To enable IOMMU/VT-d on your ASUS motherboard, you have to enter the "Advanced Mode". To enter "Advanced Mode", press <F7> while you're in "EZ Mode".
If you have an AMD processor, navigate to the "Advanced" tab (by pressing the arrow keys), navigate to "AMD CBS", and set "IOMMU" to "Enabled" from the BIOS/UEFI Firmware of your ASUS motherboard.
If you have an Intel processor, navigate to the "Advanced" tab (by pressing the arrow keys), navigate to "System Agent (SA) Configuration", set "VT-d" to "Enabled", and set "Control Iommu Pre-boot Behavior" to "Enable IOMMU during boot" from the BIOS/UEFI Firmware of your ASUS motherboard.
To save the changes, press <F10>, select OK, and press <Enter>. The IOMMU/VT-d feature should be enabled.
For more information on enabling the IOMMU/VT-d feature from the BIOS/UEFI Firmware of your ASUS motherboard, check the BIOS Manual of your ASUS motherboard.
How to Enable IOMMU/VT-d from the BIOS/UEFI Firmware of ASRock Motherboards
To enter the BIOS/UEFI Firmware of your ASRock motherboard, press <F2> or <Delete> right after pressing the power button of your computer.
If you're using a high-end ASRock motherboard, you may find yourself in "Easy Mode" once you enter the BIOS/UEFI Firmware of your ASRock motherboard. In that case, press <F6> to switch to "Advanced Mode". If you're using a cheap/mid-range ASRock motherboard, you may not have an "Easy Mode". You will be taken to "Advanced Mode" directly. In that case, you won't have to press <F6> to switch to "Advanced Mode".
You will be in the "Main" tab by default. Press the <Right> arrow key to navigate to the "Advanced" tab of the BIOS/UEFI Firmware of your ASRock motherboard.
If you have an AMD processor, navigate to "AMD CBS" > "NBIO Common Options" and set "IOMMU" to "Enabled" from the BIOS/UEFI Firmware of your ASRock motherboard.
If you have an Intel processor, navigate to "Chipset Configuration" and set "VT-d" to "Enabled" from the BIOS/UEFI Firmware of your ASRock motherboard.
To save the changes, press <F10>, select Yes, and press <Enter>. The IOMMU/VT-d feature should be enabled.
For more information on enabling the IOMMU/VT-d feature from the BIOS/UEFI Firmware of your ASRock motherboard, check the BIOS Manual of your ASRock motherboard.
How to Enable IOMMU/VT-d from the BIOS/UEFI Firmware of MSI Motherboards To enter the BIOS/UEFI Firmware of your MSI motherboard, press <Delete> right after pressing the power button of your computer. The BIOS/UEFI Firmware of MSI motherboards has two modes: “EZ Mode” and “Advanced Mode”. Once you’ve entered the BIOS/UEFI Firmware of your MSI motherboard, you will be in “EZ Mode” by default. To enable the IOMMU/VT-d on your MSI motherboard, you have to enter the “Advanced Mode”. To enter the “Advanced Mode”, press <F7> while you’re in “EZ Mode”. Navigate to “OC settings”, scroll down to “CPU Features”, and press <Enter>. If you have an AMD processor, set “IOMMU” to “Enabled”. If you have an Intel processor, set “Intel VT-D Tech” to “Enabled”. To save the changes, press <F10>, select Yes, and press <Enter>. The IOMMU/VT-d feature should be enabled. For more information on enabling the IOMMU/VT-d feature from the BIOS/UEFI Firmware of your MSI motherboard, check the BIOS Manual of your MSI motherboard.   How to Enable IOMMU/VT-d from the BIOS/UEFI Firmware of Gigabyte Motherboards To enter the BIOS/UEFI Firmware of your Gigabyte motherboard, press <Delete> right after pressing the power button of your computer. The BIOS/UEFI Firmware of Gigabyte motherboards has two modes: “Easy Mode” and “Advanced Mode”. To enable IOMMU/VT-d, you have to switch to the “Advanced Mode” of the BIOS/UEFI Firmware of your Gigabyte motherboard. If you’re in “Easy Mode”, you can press <F2> to switch to “Advanced Mode” on the BIOS/UEFI Firmware of your Gigabyte motherboard. Use the arrow keys to navigate to the “Settings” tab. If you have an AMD processor, navigate to “Miscellaneous” and set “IOMMU” to “Enabled” from the BIOS/UEFI Firmware of your Gigabyte motherboard. If you have an Intel processor, navigate to “Miscellaneous” and set “VT-d” to “Enabled” from the BIOS/UEFI Firmware of your Gigabyte motherboard. To save the changes, press <F10>, select Yes, and press <Enter>. IOMMU/VT-d should be enabled for your processor. For more information on enabling IOMMU/VT-d on your Gigabyte motherboard, we recommend you to read the “User Manual” or “BIOS Setup Manual” of your Gigabyte motherboard.   Conclusion We showed you how to enable the IOMMU/VT-d CPU feature from the BIOS/UEFI firmware of some of the most popular desktop motherboards (i.e. ASUS, ASRock, MSI, and Gigabyte).   References ROG STRIX Z690 series BIOS Manual ( English Edition ) ASUS ROG Maximus Z690 Hero BIOS Overview ROG STRIX X670E Series BIOS Manual ( English Edition ) ASRock Bios Optimization! [AMD 7800X3D | X670E Taichi Carrara | XMP PC 5600 CL28 G.Skill | 4090HOF] ASRock Z690 Taichi BIOS Overview MSI MEG Z690 ACE BIOS Overview Pomoc techniczna cz. 1 – Ustawianie optymalne biosu i OC w płycie głównej MSI B450 Gaming Plus Max Bios Settings 7950x3D 7800x3D [Gigabyte Aorus Elite Ax x670] GIGABYTE Z690 Aorus Elite DDR4 Motherboard BIOS Overview  
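Regardless of the motherboard vendor, once you boot back into Linux (or Proxmox VE) you can confirm that the kernel has picked up an active IOMMU. On Intel systems look for DMAR/VT-d messages, on AMD systems for AMD-Vi:
$ sudo dmesg | grep -i -e DMAR -e IOMMU -e AMD-Vi
Keep in mind that on some setups you may also need to add the intel_iommu=on kernel boot option on Intel systems (the AMD IOMMU is typically enabled by default in the kernel) before these messages show that the IOMMU is enabled.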
  16. Proxmox VE 8 is the latest version of the Proxmox Virtual Environment. Proxmox VE is an open-source enterprise Type-I virtualization and containerization platform. In this article, I am going to show you how to download the ISO image of Proxmox VE 8 and create a bootable USB thumb drive of Proxmox VE 8 on Windows 10/11 and Linux so that you can use it to install Proxmox VE 8 on your server and run virtual machines (VMs) and LXC containers.   Table of Contents Downloading the Proxmox VE 8 ISO Image Creating a Bootable USB Thumb Drive of Proxmox VE 8 on Windows 10/11 Creating a Bootable USB Thumb Drive of Proxmox VE 8 on Linux Conclusion   Downloading the Proxmox VE 8 ISO Image To download the ISO image of Proxmox VE 8, visit the official downloads page of Proxmox VE from your favorite web browser. Once the page loads, click on Download from the Proxmox VE ISO Installer section.   Your browser should start downloading the Proxmox VE 8 ISO image. It will take a while to complete.   At this point, the Proxmox VE 8 ISO image should be downloaded.   Creating a Bootable USB Thumb Drive of Proxmox VE 8 on Windows 10/11 On Windows 10/11, you can use Rufus to create bootable USB thumb drives of different operating systems. To download Rufus, visit the official website of Rufus from your favorite web browser. Once the page loads, click on the Rufus download link as marked in the screenshot below.   Rufus should be downloaded.   Insert a USB thumb drive in your computer and double-click on the Rufus app file from the Downloads folder of your Windows 10/11 system to start Rufus.   Click on Yes.   Click on No.   Rufus should start. First, select your USB thumb drive from the Device dropdown menu[1]. Then, click on SELECT to select the Proxmox VE 8 ISO image[2].   Select the Proxmox VE 8 ISO image from the Downloads folder of your Windows 10/11 system using the file picker[1] and click on Open[2].   Click on OK.   Click on START.   Click on OK.   Click on OK. NOTE: The contents of the USB thumb drive will be removed. So, make sure to move important files before you click on OK.   The Proxmox VE 8 ISO image is being written to the USB thumb drive. It will take a while to complete.   Once the Proxmox VE ISO image is written to the USB thumb drive, click on CLOSE. Your USB thumb drive should be ready for installing Proxmox VE 8 on your server.   Creating a Bootable USB Thumb Drive of Proxmox VE 8 on Linux On Linux, you can use the dd tool to create a bootable USB thumb drive of different operating systems from ISO image. First, insert a USB thumb drive in your computer and run the following command to find the device name of your USB thumb drive. $ sudo lsblk -e7   In my case, the device name of my 32GB USB thumb drive is sda as you can see in the screenshot below.   Navigate to the Downloads directory of your Linux system and you should find the Proxmox VE 8 ISO image there. $ cd ~/Downloads $ ls -lh   To write the Proxmox VE 8 ISO image proxmox-ve_8.1-2.iso to the USB thumb drive sda, run the following command: $ sudo dd if=proxmox-ve_8.1-2.iso of=/dev/sda bs=1M status=progress conv=noerror,sync NOTE: The contents of the USB thumb drive will be erased. So, make sure to move important files before you run the command above.   The Proxmox VE 8 ISO image is being written to the USB thumb drive sda. It will take a while to complete.   At this point, the Proxmox VE 8 ISO image should be written to the USB thumb drive.   
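Before removing the USB thumb drive, it's a good idea to make sure all cached writes have been flushed to it:
$ sync
The command returns silently once all buffered data has been written out to the drive.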
To safely remove the USB thumb drive from your computer, run the following command:
$ sudo eject /dev/sda
Your USB thumb drive should be ready for installing Proxmox VE 8 on any server.

Conclusion
In this article, I have shown you how to download the ISO image of Proxmox VE 8. I have also shown you how to download Rufus and use it to create a bootable USB thumb drive of Proxmox VE 8 on Windows 10/11, as well as how to create a bootable USB thumb drive of Proxmox VE 8 on Linux using the dd command.
  17. In a lab environment, lots of new users will be using JupyterHub. The default Authenticator of JupyterHub allows only Linux system users to log in to JupyterHub. So, if you want to create a new JupyterHub user, you have to create a new Linux user. Creating new Linux users manually might be a lot of hassle for you. Instead, you can configure JupyterHub to use FirstUseAuthenticator. FirstUseAuthenticator, as the name says, automatically creates a new user the first time that user logs in to JupyterHub. After the user is created, the same username and password can be used to log in to JupyterHub. In this article, I am going to show you how to install the JupyterHub FirstUseAuthenticator on the JupyterHub Python virtual environment. I am also going to show you how to configure JupyterHub to use the FirstUseAuthenticator.
NOTE: If you don't have JupyterHub installed on your computer, you can read one of the following articles depending on the Linux distribution you're using:
How to Install the Latest Version of JupyterHub on Ubuntu 22.04 LTS/Debian 12/Linux Mint 21
How to Install the Latest Version of JupyterHub on Fedora 38+/RHEL 9/Rocky Linux 9

Table of Contents:
Creating a Group for JupyterHub Users
Installing JupyterHub FirstUseAuthenticator on the JupyterHub Virtual Environment
Configuring JupyterHub FirstUseAuthenticator
Restarting the JupyterHub Service
Verifying if JupyterHub FirstUseAuthenticator is Working
Creating New JupyterHub Users using JupyterHub FirstUseAuthenticator
Conclusion
References

Creating a Group for JupyterHub Users:
I want to keep all the new JupyterHub users in a Linux group jupyterhub-users for easier management. You can create a new Linux group jupyterhub-users with the following command:
$ sudo groupadd jupyterhub-users

Installing JupyterHub FirstUseAuthenticator on the JupyterHub Virtual Environment:
If you've followed my JupyterHub installation guide to install JupyterHub on your favorite Linux distribution (Debian-based or RPM-based), you can install the JupyterHub FirstUseAuthenticator on the JupyterHub Python virtual environment with the following command:
$ sudo /opt/jupyterhub/bin/python3 -m pip install jupyterhub-firstuseauthenticator
The JupyterHub FirstUseAuthenticator should be installed on the JupyterHub virtual environment.

Configuring JupyterHub FirstUseAuthenticator:
To configure the JupyterHub FirstUseAuthenticator, open the JupyterHub configuration file jupyterhub_config.py with the nano text editor as follows:
$ sudo nano /opt/jupyterhub/etc/jupyterhub/jupyterhub_config.py
Type in the following lines in the jupyterhub_config.py configuration file. Note that the group name passed to useradd must match the jupyterhub-users group created earlier.
# Configure FirstUseAuthenticator for JupyterHub
from jupyterhub.auth import LocalAuthenticator
from firstuseauthenticator import FirstUseAuthenticator

LocalAuthenticator.create_system_users = True
LocalAuthenticator.add_user_cmd = ['useradd', '--create-home', '--gid', 'jupyterhub-users', '--shell', '/bin/bash']

FirstUseAuthenticator.dbm_path = '/opt/jupyterhub/etc/jupyterhub/passwords.dbm'
FirstUseAuthenticator.create_users = True

class LocalNativeAuthenticator(FirstUseAuthenticator, LocalAuthenticator):
    pass

c.JupyterHub.authenticator_class = LocalNativeAuthenticator
Once you're done, press <Ctrl> + X followed by Y and <Enter> to save the jupyterhub_config.py file.
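Before restarting JupyterHub, you can optionally confirm that the authenticator package is importable from the same virtual environment (assuming the /opt/jupyterhub paths used above):
$ sudo /opt/jupyterhub/bin/python3 -c 'import firstuseauthenticator; print("FirstUseAuthenticator OK")'
If this prints the message without an ImportError, JupyterHub will be able to load the authenticator.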
Restarting the JupyterHub Service: For the changes to take effect, restart the JupyterHub systemd service with the following command: $ sudo systemctl restart jupyterhub.service   If the JupyterHub configuration file has no errors, the JupyterHub systemd service should run just fine.   Verifying if JupyterHub FirstUseAuthenticator is Working: To verify whether the JupyterHub FirstUseAuthenticator is working, visit JupyterHub from your favorite web browser and try to log in as a random user with a short and easy password like 123, abc, etc. You should see the marked error message that the password is too short and the password should be at least 7 characters long. It means that the JupyterHub FirstUseAuthenticator is working just fine.   Creating New JupyterHub Users using JupyterHub FirstUseAuthenticator: To create a new JupyterHub user using the FirstUseAuthenticator, visit the JupyterHub login page from a web browser, type in your desired login username and the password that you want to set for the new user, and click on Sign in.   A new JupyterHub user should be created and your desired password should be set for the new user. Once the new user is created, the newly created user should be logged into his/her JupyterHub account.   The next time you try to log in as the same user with a different password, you will see the error Invalid username or password. So, once a user is created using the FirstUseAuthenticator, only that user can log in with the same username and password combination. No one else can replace this user account.   Conclusion: In this article, I have shown you how to install the JupyterHub FirstUseAuthenticator on the JupyterHub Python virtual environment. I have also shown you how to configure JupyterHub to use the FirstUseAuthenticator.   References: firstuseauthenticator/firstuseauthenticator/firstuseauthenticator.py at main · jupyterhub/firstuseauthenticator · GitHub  
  18. MySQL is a reliable and widely used DBMS that utilizes SQL and a relational model to manage data. MySQL is installed as part of the LAMP stack on Linux, but you can also install it separately. Even on Ubuntu 24.04, installing MySQL is straightforward. This guide outlines the steps to follow. Read on!

Step-By-Step Guide to Install MySQL on Ubuntu 24.04
If you have a user account on your Ubuntu 24.04 system with sudo privileges, installing MySQL requires you to follow the procedure below.

Step 1: Update the System's Repository
When installing packages on Ubuntu, you should update the system's repository to refresh the sources list. Doing so ensures the MySQL package you install is the latest stable version.
$ sudo apt update

Step 2: Install MySQL Server
Once the package index updates, the next step is to install the MySQL server package using the below command.
$ sudo apt install mysql-server
After the installation, start the MySQL service on your Ubuntu 24.04.
$ sudo systemctl start mysql.service

Step 3: Configure MySQL
Before we can start working with MySQL, we need to make a couple of configurations. First, access the MySQL shell using the command below.
$ sudo mysql
Once the shell opens up, set a password for your 'root' user using the syntax below.
ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY 'your_password';
Here, we've also specified the mysql_native_password authentication method. Exit the MySQL shell.
exit;

Step 4: Run the MySQL Script
One interesting feature of MySQL is that it offers a script that you should run to quickly set it up. The script prompts you to specify different settings based on your preference. For example, you will be prompted to set a password for the root user. Go through each prompt and respond accordingly.
$ sudo mysql_secure_installation

Step 5: Modify the Authentication Method
After successfully running the MySQL installation script, you should change the authentication method and set it to use the auth_socket plugin. Start by accessing your MySQL shell using the root account.
$ mysql -u root -p
Once logged in, run the below command to modify the authentication plugin.
ALTER USER 'root'@'localhost' IDENTIFIED WITH auth_socket;

Step 6: Create a MySQL User
So far, we only have access to MySQL through the root account. We should create a new user and specify what privileges they should have. When creating a new user, you must add their username and the login password using the syntax below.
CREATE USER 'username'@'localhost' IDENTIFIED BY 'password';
Now that the user is created, we need to specify what privileges the user has when using MySQL. For instance, you can give them privileges, such as CREATE, ALTER, etc., on a specific database or on all databases. Here's an example where we've granted a few privileges to the added user on all available databases. Feel free to specify whichever privileges are ideal for your user.
GRANT CREATE, ALTER, INSERT, UPDATE, SELECT ON *.* TO 'username'@'localhost' WITH GRANT OPTION;
For the new user and the privileges to apply, flush the privileges and exit MySQL.
FLUSH PRIVILEGES;

Step 7: Confirm the Created User
As the last step, we should verify that our user can access the database and has the specified privileges. Start by checking the MySQL service to ensure it is running.
$ sudo systemctl status mysql
Next, access MySQL using the credentials of the user you added in the previous step.
$ mysql -u username -p
A successful login confirms that you've successfully installed MySQL, configured it, and added a new user.
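As an optional sanity check, you can also ask MySQL to display the privileges of the account you just logged in with (username is the same placeholder used above):
$ mysql -u username -p -e "SHOW GRANTS FOR CURRENT_USER();"
The output should list the CREATE, ALTER, INSERT, UPDATE, and SELECT privileges granted in the previous step.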
Conclusion MySQL is a relational DBMS widely used for various purposes. It supports SQL in managing data, and this post discusses all the steps you should follow to install it on Ubuntu 24.04. Hopefully, you’ve installed MySQL on your Ubuntu 24.04 with the help of the covered steps.
  19. Task Manager is an app on Windows 10/11 that is used to monitor the running apps and services of your system. The Task Manager app is also used for monitoring the CPU, memory, disk, network, GPU, and other hardware usage information. A few screenshots of the Windows Task Manager app are shown below:
In this article, I am going to show you 6 different ways of opening the Task Manager app on Windows 10/11.

Table of Contents:
Opening the Task Manager App from the Start Menu
Opening the Task Manager App from the Windows Taskbar
Opening the Task Manager App from the Run Window
Opening the Task Manager App from the Command Prompt/Terminal
Opening the Task Manager App from the Windows Logon Menu
Opening the Task Manager App Using the Keyboard Shortcut

1. Opening the Task Manager App from the Start Menu
Search for the term app:task in the Start Menu and click on the Task Manager app from the search result as marked in the screenshot below. The Task Manager app should be opened.

2. Opening the Task Manager App from the Windows Taskbar
Right-click (RMB) on an empty location of the Windows taskbar and click on Task Manager. The Task Manager app should be opened.

3. Opening the Task Manager App from the Run Window
To open the Run window, press <Windows> + <R>. In the Run window, type in taskmgr in the Open section[1] and click on OK[2]. The Task Manager app should be opened.

4. Opening the Task Manager App from the Command Prompt/Terminal
To open the Terminal app, right-click (RMB) on the Start Menu and click on Terminal. The Terminal app should be opened. Type in the command taskmgr and press <Enter>. The Task Manager app should be opened.

5. Opening the Task Manager App from the Windows Logon Menu
To open the Windows logon menu, press <Ctrl> + <Alt> + <Delete>. From the Windows logon menu, click on Task Manager. The Task Manager app should be opened.

6. Opening the Task Manager App Using the Keyboard Shortcut
The Windows 10/11 Task Manager app can be opened with the keyboard shortcut <Ctrl> + <Shift> + <Escape>.

Conclusion:
In this article, I have shown you how to open the Task Manager app on Windows 10/11 in 6 different ways. Feel free to use the method you like the best.
  20. Python and R programming languages rely on Anaconda as their package and environment manager. With Anaconda, you get tons of the packages necessary for data science, machine learning, and other computational tasks. To utilize Anaconda on Ubuntu 24.04, you need to install the conda utility. This post shares the steps for installing conda for Python 3; we will install version 2024.2-1. Read on!

How to Install conda on Ubuntu 24.04
Anaconda is an open-source platform, and by installing conda, you get access to it and can use it for scientific computational tasks such as machine learning. The beauty of Anaconda lies in its numerous scientific packages, ensuring you can freely use it for your project needs. Installing conda on Ubuntu 24.04 follows a series of steps, which we've discussed in detail.

Step 1: Downloading the Anaconda Installer
When installing Anaconda, you should check and use the latest version of the installer script. You can access all the latest Anaconda3 installer scripts from the Anaconda Downloads Page. As of writing this post, 2024.2-1 is the latest version, and we can go ahead and download it using curl.
$ curl https://repo.anaconda.com/archive/Anaconda3-2024.2-1-Linux-x86_64.sh --output anaconda.sh
Ensure you change the version when using the above command. Also, navigate to where you want the installer script to be saved. In the above command, we've specified to save the installer as anaconda.sh, but you can use any preferred name.
The installer script is large, so the download will take some time depending on your network's performance. Once the download is completed, verify the file is available using the ls command.
Another crucial step is to check the integrity of the installer script. To do so, compute its SHA-256 checksum by running the below command.
$ sha256sum anaconda.sh
Once you get the output, confirm that it matches the hash published for that installer on the Anaconda website. Once everything checks out, you can proceed with the installation.

Step 2: Run the conda Installer Script
Anaconda has an installer script that will take you through installing it. To run the bash script, execute the below command.
$ bash anaconda.sh
The script will trigger different prompts that will walk you through the installation. For instance, you must press the Enter key to confirm that you are okay with the installation. Next, a document containing the lengthy Anaconda license agreement will open. Please go through it, and once you reach the bottom, type yes to confirm that you agree with the license terms.
You must also specify where you want Anaconda to be installed. By default, the script selects a location in your home directory, which is okay in some instances. However, if you prefer a different location, specify it and press the Enter key again to proceed with the process.
Conda will start installing, and the process will take a few minutes. In the end, you will get prompted to initialize Anaconda3. If you wish to initialize it later, choose 'no.' Otherwise, type 'yes,' as in our case.
That's it! You will get an output thanking you for installing Anaconda3. This message confirms that the conda utility was installed successfully on Ubuntu 24.04, and you now have the green light to start using it.

Step 3: Activate the Installation and Test Anaconda3
Start by sourcing the ~/.bashrc with the below command.
$ source ~/.bashrc
Next, restart your shell to open up in the Anaconda3 base environment.
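If you prefer not to close your terminal, you can also activate the base environment in the current shell (assuming you accepted the default installation path in your home directory):
$ source ~/anaconda3/bin/activate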
You can now check the installed conda version. $ conda --version Better yet, you can view all the available packages by listing them using the command below. $ conda list With that, you’ve installed Conda on Ubuntu 24.04. You can start working on your projects and maximize the power of Anaconda3 courtesy of its multiple packages. Conclusion Anaconda is installed by installing the conda command-line utility. To install conda, you must download its installer script, execute it, go through the installation prompts, and agree to the license terms. Once you complete the process, you can use Anaconda3 for your projects and leverage all the packages it offers.
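For example, a common first step is to create and activate a separate environment for a project instead of working in base (a minimal sketch with a hypothetical environment name):
$ conda create -n myproject python=3.11
$ conda activate myproject
Keeping each project in its own environment prevents package conflicts between projects.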
  21. Now that you have Ubuntu 24.04 installed, the remaining task is ensuring that you install all the software you need, including Java. Installing Java on Ubuntu 24.04 makes it possible to develop and run Java applications, and as a Java programmer, you will inevitably install Java on Ubuntu. Java isn't pre-installed on Ubuntu. As such, you must know what steps are required to quickly install Java before you start using it for your projects. Reading this post will arm you with a simple procedure to install Java on Ubuntu 24.04.

Java JDK vs JRE
When installing Java on Ubuntu 24.04, a common concern is understanding the difference between the JDK and the JRE and knowing which to install. Here's the thing: the Java Development Kit (JDK) comprises all the tools required to develop Java applications, including the Java compiler and debugger. Anyone looking to create Java apps must have the JDK installed.
As for the Java Runtime Environment (JRE), it is required for anyone looking to run Java applications on their system. So, if you only want to run Java applications without building them, you only need to install the JRE and not the JDK.
As a programmer, you will likely develop and run Java applications. Therefore, you should install both the JDK and the JRE for everything to work correctly.

How to Install Java on Ubuntu 24.04
Installing Java only requires an internet connection. Again, when you install the JDK, it should install the default JRE by default. However, that's not always the case. Besides, if you want a specific version, you can specify it when running the install command. Here, we've provided the steps to follow to install Java quickly. Take a look!

Step 1: Update Ubuntu's Repository
Updating the system repository ensures that the package you install is the latest stable version. The update command refreshes the sources list, and when you install Java, you will have the updated source index for the latest version.
$ sudo apt update

Step 2: Install the Default JRE
Before installing Java, first verify that it isn't already installed on your Ubuntu 24.04 by checking its version with the following command.
$ java --version
If Java is installed, its version will be displayed in the output. Otherwise, you will get an output showing that Java was not found. In that case, install the default JRE using the below command.
$ sudo apt install default-jre
The installation time will depend on your network's speed.

Step 3: Install OpenJDK
After successfully installing the JRE, you are ready to install OpenJDK. Here, you can choose to install the default JDK, which will install the available version. Alternatively, you can opt to install a specific JDK version depending on your project requirements. For instance, to install OpenJDK 21, we would execute the command as follows.
$ sudo apt install openjdk-21-jdk
During the installation process, you will get prompted to confirm a few things. Press 'y' and hit the Enter key to proceed with the installation. Once the installation is complete, you will have Java installed on your Ubuntu 24.04 and ready for use.
The last task is to verify that Java is installed. By checking the version, you will get an output showing which version is installed. If you want a different version, ensure you specify it in the previous commands, as your project requirements could be different.
$ java --version
For our case, the output shows that we've installed Java v21.0.3.

Conclusion
Installing Java on Ubuntu 24.04 isn't a complicated process.
However, you must know what your project requirements are to guide which version you install. To recap, installing Java requires you to first update the repository. Next, install JRE and then specify what OpenJDK version to install. You will have managed to install Java on Ubuntu 24.04, and this post shares more details on each step.
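If a build tool or IDE later asks for JAVA_HOME, you can locate the installed JDK and export the variable (a minimal sketch; the exact path depends on the JDK version and architecture you installed):
$ readlink -f $(which java)
$ export JAVA_HOME=/usr/lib/jvm/java-21-openjdk-amd64
Add the export line to ~/.bashrc if you want it to persist across sessions.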
  22. The Node Package Manager (NPM) is a tool that allows developers to install and work with different JavaScript packages efficiently. Installing NPM involves installing Node.js, and this post shares all the insights you need to install NPM. Node.js is a suitable option for anyone looking to have a scalable backend that utilizes JavaScript. Node.js is built on Chrome's V8 JavaScript engine, and you can easily install it on your Ubuntu 24.04 to start powering the backend functionality in your projects. We will focus on two options for installing NPM on Ubuntu 24.04.

Method 1: Install NPM on Ubuntu 24.04 via APT
You can find NPM and Node.js in the Ubuntu repository. If you don't need any specific Node.js version for your project, you can utilize this option to install NPM and Node.js with the below commands.
First, run the update command.
$ sudo apt update
Next, source Node.js from the default repository and install it using the command below.
$ sudo apt install nodejs
At this point, you have Node.js installed, and you can verify the installed version using the command below.
$ node -v
To install NPM, run the following command.
$ sudo apt install npm
Verify that NPM is installed by checking its version.
$ npm --version
We have npm v9 for our case. You can now comfortably start working on your Node.js project, and with NPM installed, you have room to install any dependencies or packages. That's the first option for installing NPM and Node.js on Ubuntu 24.04.

Method 2: Install NPM Using the NodeSource PPA
When you install the NodeSource package, it will install NPM without you needing to install it separately. This method allows you to install a specific Node.js version, provided you've correctly added the PPA by downloading its setup script using wget or curl.
Start by visiting the Node.js project to see which version you want to install. Once you decide on the version, retrieve it using curl as in the following command. For our example, we've retrieved version 20.x.
$ curl -sL https://deb.nodesource.com/setup_20.x -o nodesource_setup.sh
The script will get saved in your current directory, and you can verify it using the ls command. The next step is to run the script, but before that, you can open it with a text editor to confirm that it is safe to execute. You can then run the script using bash with the following command.
$ sudo bash nodesource_setup.sh
The command will add the NodeSource PPA to your local package sources, from which you can install Node.js. When the script completes executing, you will get an output confirming the PPA has been added, and it will display the command you should use to install Node.js.
Note that if you have already installed Node.js using the previous method, it's best to uninstall it before installing the NodeSource package to avoid running into an error. To do so, use the below command.
$ sudo apt autoremove nodejs npm
To install the Node.js package, which will also install NPM, run the following command.
$ sudo apt install nodejs
Your system will source the package from the NodeSource repository we added. It will then proceed to install the NodeSource package version that you downloaded. Once the installation is complete, check its version using the following command.
$ node -v
The output will display the Node.js version you downloaded, which is v20.12.2 for our case. Still, if we check the installed NPM version, you will notice it's different from what we had earlier.
$ npm --version The PPA installed NPM v10.5.0, which is higher than what we installed in method one earlier. Conclusion For anyone looking to use NPM and Node.js to power their backend application, this post shares two different methods for seamlessly installing NPM. This allows you to run your Node.js and install packages effectively. You can install NPM from the default Ubuntu 24.04 repository or add its PPA from the Node Source project, which will automatically install NPM alongside Node.js.
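As a quick test of the toolchain, you can initialize a small project and pull in a package with NPM (a minimal sketch using a hypothetical project directory and the express package as an example):
$ mkdir npm-demo && cd npm-demo
$ npm init -y
$ npm install express
This creates a package.json file and installs express into node_modules, confirming that NPM can resolve and install dependencies.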
  23. As the name suggests, grep or global regular expression print lets you search for specific text patterns within a file's contents. Its functionalities include pattern recognition, defining case sensitivity, searching multiple files, recursive search, and many more. So whether you're a beginner or a system administrator, knowing how to use the grep command to locate files efficiently is good. This tutorial will explain how to use grep in Linux and discuss its different applications.

How To Use the Grep Command in Linux
The basic function of the grep command is to search for a particular text inside a file. You can do that by entering the following command:
grep "text_to_search" file.txt
Please replace 'text_to_search' with the text you want to search for and 'file.txt' with the target file. For example, to find the string "Hello" in the file named file.txt, we will use:
grep "Hello" file.txt
On entering the above command, grep will scan file.txt for "Hello." As a result, it shows as output the whole line or lines containing the target text.
If the target file is on a path different from your current directory, please mention that path along with the file name. For instance:
grep "Hello" ~/Documents/file.txt
Here, the tilde '~' represents your home directory.
The above example shows how you can search for a piece of text in a single file. However, if you want to do the same search on multiple files at once, mention them one after another in a single grep command:
grep "Hello" file.txt Linux_info.txt Password.txt
In case you're not sure about your string's case (uppercase or lowercase), perform a case-insensitive search by using the -i option:
grep -i "hello" Intro.txt
Although the string we input was not an exact match, we received accurate results through the case-insensitive search.
If you want to invert the search and display the lines that don't contain the specified pattern, use the -v option:
grep -v "Hello" file.txt Linux_info.txt Password.txt
Moreover, if you want to display the lines that start with a certain word, use the '^' symbol. It serves as an anchor that specifies the beginning of the line.
grep "^Hello" file.txt
The above commands are only useful when you know which file to search. If you don't, you can recursively search for the string inside a whole directory using the -r option. For example, let's search for "Hello" inside the Documents directory:
grep -r "Hello" ~/Documents
Furthermore, you can also count the number of times the input string appears in a file with the -c option:
grep -c "Hello" Intro.txt
Similarly, you can display the line numbers along with the matched lines with the -n option:
grep -n "Hello" Intro.txt

A Quick Wrap-up
Users often remember that a file used to contain a piece of text but forget the file name, which can land them in deep trouble. Hence, this tutorial was about using the grep command to search for text in a file's contents. Furthermore, we have used different examples to demonstrate how you can tweak the grep command's functioning with a few options. You can experiment by combining multiple options to find out what suits best according to your use case.
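For instance, the following combination (one possible sketch) searches recursively, ignores case, and prints the line number of every match in one go:
grep -rni "hello" ~/Documents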
  24. Linux works well as a multiuser operating system. Many users can access a single OS simultaneously without interfering with each other. However, if others can access your directories or files, the risk may increase. Hence, from a security perspective, securing the data from others is essential. Linux has features to control access through permissions and ownership.
The ownership of files, folders, or directories is categorized into three parts, which are:
User (u): This is the default owner, also called the file's creator.
Group (g): It is a collection of multiple users who share the same permissions to access folders or files.
Other (o): Those users not in the above two categories belong to it.
That's why Linux offers simple ways to change file permissions without hassles. So in this quick blog, we have included all the possible methods to change file permissions in Linux.

How to Change File Permissions in Linux
In Linux, file permissions are divided into three parts, and these are:
Read (r): With this permission, users can only open and read the file and can't make any changes to it.
Write (w): Users can edit, delete, and modify the file content with write permission.
Execute (x): When users have this permission, they can run the file as a program or script.
The symbols used to modify permissions are summarized below:
Owner representation: User → u, Group → g, Other → o
Operators: '+' to add a permission, '-' to remove a permission, '=' to set a permission
Permission symbols (symbolic mode): Read → r, Write → w, Execute → x
Permission values (absolute mode): Read → 4, Write → 2, Execute → 1
As you can see above, there are two ways to represent permissions. You can use both of these modes (symbolic and absolute) to change file permissions using the chmod command. chmod refers to "change mode" and allows users to modify the access permissions of files or folders.

Using chmod Symbolic Mode
In this method, we use the symbols (for owners: u, g, o; for permissions: r, w, x) to add, remove, or set permissions using the following syntax:
chmod <owner_symbol><operator><permission_symbol> <filename>
Before changing the file permissions, we first need to find the current ones. For this, we use the 'ls' command.
ls -l
Here the permission symbols mean the following:
'-' : shows the file type.
'rw-' : shows the permissions of the user (read and write)
'rw-' : shows the permissions of the group (read and write)
'r--' : shows the permissions of others (read)
In the above image, we highlighted one file in which the user has read and write permissions, the group has read and write permissions, and others have only read permission. So here, we are going to add execute permission for others. For this, use the following command:
chmod o+x os.txt
As you can see, the execute permission has been added to the other category.
You can also change multiple permissions for different owners at once. Following the above example, we add execute permission for the user, remove write permission from the group, and add write permission for others. For this, we can run the below command:
chmod -v u+x,g-w,o+w os.txt
Note: Use commas to separate the owner clauses, but do not leave spaces between them.

Using chmod Absolute Mode
Similarly, you can change the permissions through absolute mode.
In this method, mathematical operators (+, -, =) and numbers represent the permissions, as shown above. For example, suppose the current permissions of the file are as follows:
User: Read + Write → 4 + 2 = 6
Group: Read + Write → 4 + 2 = 6
Other: Read + Execute → 4 + 1 = 5
So the permission set is represented as 665.
Now, we are going to remove the read permission from the user and others, and the final calculation is:
User: Read + Write - Read → 6 - 4 = 2
Group: Read + Write (unchanged) → 6
Other: Read + Execute - Read → 5 - 4 = 1
The updated permission set is represented as 261. To apply it, use the following chmod command:
chmod -v 261 os.txt

Change User Ownership of the File
Apart from changing the file permissions, you may also run into situations where you have to change the file ownership. For this, the chown command is used, which stands for "change owner".
The file details represent the following:
<filetype> <file_permission> <user_name> <group_name> <file_name>
So, in the above example, the owner's (user) name is 'prateek', and you can only change it to a user name that exists on your system. Before changing the username, first list all the users using the following command:
cat /etc/passwd
Or
awk -F ':' '{print $1}' /etc/passwd
Now, you can change the owner of the file to any of these names. The general syntax to change the file owner is as follows:
sudo chown <new_username> <filename>
Note: sudo permission is required in some cases.
Based on the above result, we want to change the username from 'prateek' to 'proxy'. To do this, we run the below command in the terminal:
sudo chown proxy os.txt

Change Group Ownership of the File
First, list all the groups that are present on your system using the following command:
cat /etc/group | cut -d: -f1
The 'chgrp' command (change group) changes the file's group. Here, we change the group from 'prateek' to 'disk' using the following command (a shortcut that changes the user and group in one go is shown after the conclusion):
sudo chgrp disk os.txt

Conclusion
Managing file permissions is essential for access control and data security. In this guide, we focused on changing file permissions in Linux. Linux lets you control ownership (user, group, others) and permissions (read, write, execute). Users can add, remove, or set permissions according to their needs, and they can easily modify file permissions with the chmod command using the symbolic and absolute methods.
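As the shortcut mentioned above, chown can change the user and the group in a single command by separating them with a colon (using the same example names):
sudo chown proxy:disk os.txt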
  25. Operating systems use packets to transfer data over a network. These are small chunks of information that carry data and travel between devices. Moreover, when a network problem arises, packets help identify the root cause of the underlying problem. How? By tracing the route of those packets. The traceroute command in Linux helps you map the path packets take while traveling to a specific destination. This further helps you troubleshoot network latency, packet loss, network hops, DNS resolution issues, slow website access, and more. So, in this blog, we will explain simple ways to use the traceroute command in Linux.

How To Use the Traceroute Command in Linux
Firstly, traceroute does not come preinstalled on many Linux distributions. However, you can install it by executing one of the commands below, according to your system:
Debian/Ubuntu: sudo apt install traceroute
Fedora: sudo dnf install traceroute
Arch Linux: sudo pacman -Sy traceroute
openSUSE: sudo zypper install traceroute
After installation, you can run the traceroute command by entering:
traceroute <destination_IP>
Replace <destination_IP> with the IP address of the destination device. Once you run the command, your system will display the list of hops with their IP addresses and response times. Hops are the devices that your packets go through while traveling to a specific destination. For example, let's use the traceroute command for Google's IP address:
traceroute 8.8.8.8
The result shows only one hop while marking the others with asterisks (*). This happens because the subsequent hops did not respond within the timeout period.
Moreover, the traceroute command, by default, uses DNS resolution to get the hostnames of hops, which slows down the process. You can skip that part and tell it to display only the IP addresses by using the -n option:
traceroute -n <destination_IP>
If you want to limit the number of hops, use the -m option along with the traceroute command:
traceroute -m N <destination_IP>
Here, put the desired number of hops in place of N. On execution, it will trace at most N hops.
The traceroute command displays every hop's round-trip time (RTT). You can also run the trace with ICMP echo requests instead of the default UDP probes by using the -I option:
traceroute -I <destination_IP>
For instance, retake the example of Google:
Tip: If your specified destination restricts ICMP packets, you can trace with UDP packets instead by employing the -U option:
traceroute -U <destination_IP>
In case you want to explore more options for traceroute, please run the below command:
traceroute --help

A Quick Wrap-up
Traceroute is an amazing CLI utility that you can use to diagnose network-related issues in Linux. It traces the path of packets to help identify critical issues in the network. Hence, we have explained the essential details of the traceroute command with the help of some examples.
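As a final example, these options can be combined; the command below skips DNS lookups, limits the trace to 15 hops, and sends two probes per hop (the -q option sets the probe count, which defaults to 3):
traceroute -n -m 15 -q 2 8.8.8.8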
