


Entries in this blog

by: Abhishek Prakash
Fri, 30 May 2025 17:30:45 +0530


An interesting development came from Microsoft as it released a new terminal-based editor with open source license.

I kind of liked it at first glance until I tried my hands on a shell script written in this editor and then I ran into:

MS Edit adds Windows line endings

The issue is that it added the classic Windows-style line endings (CRLF), which UNIX-like systems do not handle well.
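
If you hit the same problem, a quick way to check and fix a script is shown below (a minimal sketch, assuming GNU sed or the dos2unix package is available):

file script.sh               # reports "CRLF line terminators" if Windows endings are present
sed -i 's/\r$//' script.sh   # strip the carriage returns in place
# or simply: dos2unix script.sh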

I knew it was too good to be true to have something perfect for Linux from Microsoft 🤦

Here are the highlights of this edition:

  • Open-source notification Inbox infrastructure
  • Listmonk newsletter
  • Historical view of system resource utilization
  • Bang bang... shebang
  • And memes, news and tools to discover

🚀 Level up your coding skills and build your own bots

Harness the power of machine learning to create digital agents and more with hot courses like Learning LangChain, The Developer's Playbook for Large Language Model Security, Designing Large Language Model Applications, and more.

Part of the purchase goes to Code for America! Check out the ebook bundle here.

Humble Tech Book Bundle: Machine Learning, AI, and Bots by O’Reilly 2025
Master machine learning with this comprehensive library of coding and programming courses from the pros at O’Reilly.
by: Adnan Shabbir
Wed, 28 May 2025 05:55:55 +0000


Ubuntu 25.04, codenamed Plucky Puffin and released in April 2025, is an interim release supported for 9 months (until January 2026). It ships with experimental features that will be tested leading up to the next LTS; those that prove stable are expected to be carried forward into Ubuntu 26.04, the next Ubuntu LTS in line.

In today’s guide, I’ll give you a brief overview of Ubuntu 25.04, what it looks like, and what other features are included in the development and testing.

What’s New in Ubuntu 25.04 Codenamed Plucky Puffin?

With every interim release (like Ubuntu 25.04) comes a list of new features to be tested, along with improvements to existing functionality. Here are the major and minor updates in Ubuntu 25.04:

GNOME 48

Ubuntu 24.04 is based on GNOME 46, whereas Ubuntu 25.04 ships with GNOME 48. GNOME 48 is more modern and graphics-friendly, refining and improving on the previous GNOME releases.

Kernel 6.14

The kernel is the central nervous system of Linux, i.e., the bridge between the hardware and the software. Ubuntu 25.04 comes with kernel 6.14 (upstream), i.e., developed and maintained by Linus Torvalds and the Linux kernel maintainers.

The first official release of Ubuntu 24.04 contained the Kernel 6.8. However, Ubuntu 24.04.2 is now updated to the Linux Kernel 6.11.

Security Center

Ubuntu is an open-source OS and is generally considered more secure than other OSs. Still, modern applications often require permissions that the user has to grant for them to work smoothly. To manage such permissions, Ubuntu has introduced a Security Center in this release, so that users can turn individual permissions on or off.

Here’s the initial interface of the Security Center, where you can see that the feature is experimental at the moment.

If you enable it, strictly confined apps request permissions, and app permissions can be reviewed under “Settings > Apps”.

Updated apt Installation Interface

Ubuntu 25.04 ships apt 3.0, which brings an interactive UI for apt-based installations and uninstallations:

Uninstalling:
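
To try the new interface yourself (a quick sketch on a default Ubuntu 25.04 install), confirm the apt version and install or remove any small package:

apt --version            # should report the apt 3.0 series on Ubuntu 25.04
sudo apt install htop    # installation shows the new interactive progress interface
sudo apt remove htop     # removal walks through the same interactive UI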

Well-Being

This feature is about the well-being of Ubuntu users. You can enable it and configure the following:

  • Screen Time: Set the average screen time usage.
  • Eyesight Reminder: A reminder to look away from the screen.
  • Movement Reminder: A reminder to move around.

Document Viewer (PDF Viewer)

Ubuntu 25.04 now ships a built-in Document Viewer for various file types. You can open a variety of files in this viewer, i.e., PDF, comic books, DjVu, and TIFF documents. Here’s the document viewer:

HDR Display – High Dynamic Range

High Dynamic Range (HDR) is a display technology that delivers a wider range of brightness and color. This is one of the significant additions in this release. If you have an HDR monitor, you can now attach it to your Ubuntu 25.04 system and experience HDR output.

Head over to “Settings > Display > HDR” to manage it.

Other Notable Updates in Ubuntu 25.04

Color to Color Management:

The Color section in “Settings” has been replaced with Color Management.

Timezone in Events:

Ubuntu 25.04 provides timezone support while creating events in the calendar. Here’s how you can locate it in the Calendar app of Ubuntu 25.04:

JPEG XL Image Support:

JPEG XL is an enhanced successor to the JPEG image format, and it is now supported by Ubuntu, providing a better experience for users.

Notification Grouping:

Ubuntu 25.04 now groups notifications, making it easier for users to navigate through multiple notifications.

Yaru Theme:

The icon and theme experience is better than the previous releases. The icons are now more dynamic and are well integrated with the accent color support.

Updated Network Manager:

Ubuntu 25.04 has an updated Network Manager 1.52, whereas Ubuntu 24.04.2 (released parallel to Ubuntu 25.04) has 1.46. The significant change is that Network Manager 1.52 is more aligned towards IPv6 as compared to the 1.46 version.

Chrony (Network Time Protocol):

Ubuntu 25.04 has adopted Chrony as its Network Time Protocol client (inspired by SUSE and RHEL), which synchronizes the system time against NTP servers or reference clocks such as a GPS receiver.

Until now, Ubuntu has been using “systemd-timesyncd” as its time-sync client, which implements SNTP (Simple Network Time Protocol). SNTP synchronizes the system clock with a remote server but is less accurate than Chrony (a full NTP implementation).

  • What is NTP? The purpose of NTP is to synchronize system clocks over a network to support security, performance, event coordination, and logging. NTP keeps the time sync as accurate as possible, i.e., within milliseconds or sub-milliseconds.
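
To check which time client is actually in use after this switch (a quick sketch, assuming the chrony package is installed and running):

timedatectl status    # shows whether the clock is synchronized and whether an NTP service is active
chronyc tracking      # reports the reference server, current offset, and estimated error
chronyc sources       # lists the NTP servers chrony is polling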

Developer Tools and Libraries:

Since Ubuntu is well-liked in the developer community, Ubuntu contributors continuously work on shipping updated tools. Ubuntu 25.04 comes with updated toolchains, i.e., Python, GCC, Rust, and Go.

Similarly, several developer-oriented libraries are also upgraded, i.e., glibc, binutils, and OpenSSL.

Gaming Support (NVIDIA Dynamic Boost):

NVIDIA Dynamic Boost shifts power between the CPU and GPU based on the workload, which benefits gaming. It is now enabled by default in Ubuntu 25.04.

System Monitor’s Interface:

Ubuntu’s system monitor shows information about processes, resources, and file systems. In Ubuntu 25.04, there is a slight change in its interface. For instance, the Processes tab now shows a reduced set of columns: ID, CPU, Memory, Disk Write, and Disk Read. Here’s the interface where you can see this.

However, in older versions, the Processes tab has some additional info for each process:

That’s all from the notable features of Ubuntu 25.04.

Would you like to upgrade your Ubuntu to Ubuntu 25.04?

How to Upgrade to Ubuntu 25.04 Plucky Puffin

If you are using another recent release of Ubuntu (Ubuntu 24.04 or Ubuntu 24.10), you can easily upgrade to Ubuntu 25.04. Let’s go through the steps to upgrade:

Important Note: If you are using an Ubuntu LTS release older than Ubuntu 24.04, then you have to first upgrade to Ubuntu 24.04.

Once upgraded to Ubuntu 24.04, you are ready to follow the steps below and upgrade to Ubuntu 24.10.

Step 1: Upgrade Your Ubuntu to Ubuntu 24.10

Since it is an interim release, you must have the previous release installed to get Ubuntu 25.04. Here’s how you can upgrade to Ubuntu 24.10:

  • Update and upgrade the system repositories:
sudo apt update && sudo apt upgrade

Note: It is recommended to run “sudo apt autoremove” after updating/upgrading to clean up dependencies and packages that are no longer required.

  • If you are using Ubuntu 24.04 LTS, then you have to enable the non-LTS release upgrade. For that, open the release-upgrader file in an editor:
sudo nano /etc/update-manager/release-upgrades

Now, change the “Prompt” parameter’s value from “lts” to “normal”, as can be seen below:

  • Start the upgrade using the do-release-upgrade command:
sudo do-release-upgrade

Here you go:

Press “y” to proceed with the installation:

While upgrading, you will be prompted several times asking for acceptance of the changes being processed:

Step 2: Upgrade to Ubuntu 25.04

Once you are on Ubuntu 24.10, use the do-release-upgrade command again to upgrade to Ubuntu 25.04:

sudo do-release-upgrade

Note: If you get any prompt like “please install all updates available for your release”, then use the command “sudo apt dist-upgrade” and reboot to fix it.

Here’s the Ubuntu 25.04:

That’s all from this guide.

Conclusion

Ubuntu 25.04, codenamed “Plucky Puffin”, is an interim Ubuntu release supported for 9 months. Ubuntu 25.04, released in April 2025, features the updated GNOME (48), updated Kernel (6.14), an improved apt version (3.0), and a security center. Other features include the HDR display, enhanced color management, timezone support in events, etc.

This post briefly lists the notable features of Ubuntu 25.04 and also explains the process to upgrade to Ubuntu 25.04.

FAQs

How Long Will Ubuntu 25.04 be Supported?

Ubuntu 25.04 will be supported until January 2026, since it is an interim release and Ubuntu interim releases are supported for 9 months after release.

Is Ubuntu 25.04 LTS?

No, Ubuntu 25.04 is an interim release, not an LTS. The current latest LTS is Ubuntu 24.04, codenamed Noble Numbat, and the next LTS in line is Ubuntu 26.04.

How to Upgrade to Ubuntu 25.04?

First, upgrade to Ubuntu 24.04, then to 24.10, and from there you can upgrade to Ubuntu 25.04.


YAML Validator

by: Abhishek Prakash
Tue, 27 May 2025 21:57:07 +0530


Paste your YAML content or upload a file to validate syntax. Scroll down to see the details on the errors, if any.


by: Abhishek Kumar
Mon, 26 May 2025 14:31:56 +0530


I see a lot of posts on my Twitter (or X) feed debating the merits of ditching cloud services in favor of homelab self-hosted setups, much like when I tried hosting Wikipedia and the Arch wiki myself. Some even suggest using bare-metal servers for professional environments.

Source: Fireship on X

While these posts often offer intriguing tools and perspectives, I can't help but notice a pattern: companies lean heavily on cloud services until something goes catastrophically wrong, like Google Cloud accidentally deleting customer data.

Source: Ars Technica

However, let’s be real, human error can occur anywhere. Whether in the cloud or on-premises, mistakes are universal.

So, no, I’m not here to tell you to migrate your production services to a makeshift homelab server and become the next Antoine from Silicon Valley.

But if you’re wondering why people even homelab in the era of AWS and Hetzner, I’m here to make the case: it’s fun, empowering, and yes, sometimes even practical.

1. Cost control over time

Cloud services are undeniably convenient, but they often come with hidden costs. I still remember, during my initial days here at It's FOSS, Abhishek messaging me to remind me to delete any unused or improperly configured Linode instances.

That’s the thing with cloud services, you pay for convenience, and if you’re not meticulous, it can snowball.

Source: Mike Shoebox on X

A homelab, on the other hand, is a one-time investment. You can repurpose an old PC or buy retired enterprise servers at a fraction of their original cost.

Sure, there are ongoing power costs, but for many setups, especially with efficient hardware like Raspberry Pi clusters, this remains manageable.

I'll take this opportunity to share my favorite AWS meme.

AWS bill meme

2. Learning and experimentation

If you’re in tech, be it as a sysadmin, developer, or DevOps engineer, having a homelab is like owning a personal playground.

Want to deploy Kubernetes? Experiment with LXC containers? Test Ansible playbooks? You can break things, fix them, and learn along the way without worrying about production outages or cloud charges.

For me, nothing beats the thrill of running Proxmox on an old Laptop with a Core i5, 8 GB of RAM, and a 1 TB hard drive.

It’s modest (you might've seen that poor machine in several of my articles), but I’ve used it to spin up VMs, host Docker containers, and even test self-hosted alternatives to popular SaaS tools.

3. Privacy and ownership

When your data resides in the cloud, you trust the provider with its security and availability. But breaches happen, and privacy-conscious individuals might balk at the idea of sensitive information being out of their direct control.

With a homelab, you own your data. Want a cloud backup? Use tools like Nextcloud. Need to share documents easily? Host your own FileBrowser. This setup isn’t just practical, it’s liberating.

Sure, there’s a learning curve and it could be steep for many. Thankfully, we also have plug-and-play solutions like CasaOS, which we covered in a previous article. All you need to do is head to the app store, select 'install,' and everything will be taken care of for you.

4. Practical home uses

Homelabs aren’t just for tech experiments. They can serve real-world purposes, often replacing expensive commercial solutions:

  • Media servers: Host your own movie library with Jellyfin or Plex. No subscription fees, no geo-restrictions, and no third-party tracking, as long as you have the media on your computer.
  • Home security: Set up a CCTV network with open-source tools like ZoneMinder. Add AI-powered object detection, and you’ve built a system that rivals professional offerings at a fraction of the cost.
  • Family productivity: Create centralized backups with Nextcloud or run remote desktop environments for family members. You become the go-to tech person for your household, but in a rewarding way.

For my parents, I host an Immich instance for photo management and a Jellyfin instance for media streaming on an old family desktop. Since the server is already running, I also use it as my offsite backup solution, just to be safe. 😅

📋
If you are into self-hosting, always keep multiple backups of your data across different locations, systems, and media. Follow the golden 3-2-1 backup rule.
9 Dashboard Tools to Manage Your Homelab Effectively
See which server is running what services with the help of a dashboard tool for your homelab.

What about renting a VPS for cloud computing?

I completely understand that renting a VPS can be a great choice for many. It offers flexibility, ease of use, and eliminates the need to manage physical hardware.

These days, most VPS providers like AWS, Oracle, and others offer a 1-year free tier to attract users and encourage them to explore their platforms. This is a fantastic opportunity for beginners or those testing the waters with cloud hosting.

I’ve also heard great things about Hetzner, especially their competitive pricing, which makes them an excellent option for those on a budget.

In fact, I use Nanode myself to experiment with a DNS server. This setup spares me the hassle of port forwarding, especially since I’m behind CGNAT.

If you’re interested in hosting your own projects or services but face similar limitations, I’ve covered a guide on Cloudflare Tunnels that you can consult, it’s a handy workaround for these challenges.

Personally, I believe gatekeeping is one of the worst things in the tech community. No one should dictate how or where you host your projects. Mix and match! Host some services on your own systems, build something from scratch, or rent a VPS for convenience.

Just be mindful of what you’re hosting like ensuring no misconfigured recursive function is running loose and keep exploring what suits your needs best.

My journey into homelab

My journey into homelabbing and self-hosting started out of necessity. When I was in university, I didn’t have the budget for monthly cloud server subscriptions, domain hosting, or cloud storage.

That limitation pushed me to learn how to piece things together: finding free tools, configuring them to fit my needs, diving into forum discussions, helping others solve specific issues, and eagerly waiting for new releases and features.

This constant tinkering became an endless cycle of learning, often without even realizing it. And that’s the beauty of it. Whether you’re self-hosting on a VPS or your homelab, every step is a chance to explore, experiment, and grow.

So, don’t feel constrained by one approach. The freedom to create, adapt, and learn is what makes this space so exciting.

Wrapping up

At the end of the day, whether you choose to build a homelab, rent a VPS, or even dabble in both, it’s all about finding what works best for you. There’s no one-size-fits-all approach here.

For me, homelabbing started as a necessity during my university days, but over time, it became a passion, a way to learn, experiment, and create without boundaries.

Sure, there are challenges, misconfigured services, late nights debugging, and the occasional frustration, but those moments are where the real learning happens.

Renting a VPS has its perks too. It’s quick, convenient, and often more practical for certain projects.

I’ve come to appreciate the balance of using both approaches, hosting some things locally for the sheer fun of it and using VPS providers when it makes sense. It’s not about choosing sides; it’s about exploring the possibilities and staying curious.

If you’re someone who enjoys tinkering, building, and learning by doing, I’d encourage you to give homelabbing a try. Start small, experiment, and let your curiosity guide you.

And if you prefer the convenience of a VPS or a mix of both, that’s perfectly fine too. At the end of the day, it’s your journey, your projects, and your learning experience.

So, go ahead, spin up that server, configure that service, or build something entirely new. The world of self-hosting is vast, and the possibilities are endless. Happy tinkering!

by: Abhishek Prakash
Fri, 23 May 2025 18:40:52 +0530


Linux doesn't get easier. You just get better at it.

This is what I always suggest to beginners. The best way to learn Linux (or any new skill for that matter) is to start using it. The more you use it, the better you get at it 💪

Here are the highlights of this edition:

  • Master splitting windows in Vim
  • Essential YAML concepts
  • Checkcle
  • And more tools, tips and memes for you
  • This edition of LHB Linux Digest newsletter is supported by PikaPods.

🚀 Self-hosting on autopilot

PikaPods allows you to quickly deploy your favorite open source software. All future updates and backups are handled automatically by PikaPods while you enjoy using the software.

Wondering if you should rely on it? You get $5 free credit, so try it out and test it yourself.

PikaPods - Instant Open Source App Hosting
Run the finest Open Source web apps from $1.20/month, fully managed, no tracking, no ads, full privacy. Self-hosting was never this convenient.
by: Adnan Shabbir
Thu, 22 May 2025 08:59:25 +0000


The awk command is not just a command; it’s a scripting language, just like bash. It is used for advanced pattern scanning, data extraction, and text manipulation. Because of its scripting support, it is useful for Linux power users, whether administrators, developers, or Linux enthusiasts. For instance, a system administrator can use it for log processing and analysis, tracking network IPs, generating reports, and monitoring system tasks.

If you are looking to have a strong grip on awk technically, this guide provides a brief tutorial on the awk command, including its use cases with examples.

awk Linux Command

The awk command contains programming language features. Because of its diverse use cases, awk has left behind its command-line competitors, i.e., grep, cut, etc. So, before diving further into the awk command examples, let’s first discuss the following:

How is the awk command better than grep, cut, and other competitors?

Let’s quickly have a look at why awk beats grep, cut, and other competitors.

  • The awk command has built-in variables that are applied silently by default. For instance, FS (the field separator) lets you split each record into fields by some character so you can access and operate on them individually.
  • awk supports loops, variables, conditions, arrays, and functions, which grep, cut, and the other competitors do not.
  • The output can be formatted neatly using these built-in variables, something the other commands cannot do (see the short comparison after this list).
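
As a quick comparison (a small sketch using the standard, colon-separated /etc/passwd file), cut can only extract fields, while awk can split, reorder, and format them in one pass:

cut -d':' -f1,7 /etc/passwd                        # prints the fields, still joined by ':'
awk -F':' '{print $1 " uses " $7}' /etc/passwd     # splits on ':' and formats the output freely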

Syntax of the awk Command

awk 'pattern {action}' file

The quotes around the program are optional for the simplest one-pattern programs, but whenever the program contains spaces, multiple pattern-action pairs, or characters the shell would interpret, you need them, so it is safest to always quote. Here’s the further breakdown of the awk syntax:

The “file” is the name of the file on which the “awk” is being applied. The ‘pattern {action}’ is the core part of awk:

  • pattern: represents a condition, i.e., it can be a regex; it is usually left blank to match all lines.
  • action: the operation to execute on each line that matches the pattern (see the short illustration below).
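
For instance (a hypothetical sketch, assuming a file whose third field holds numeric marks), the pattern can be a condition and the action runs only on the matching lines:

awk '$3 > 50 {print $1, $3}' students.txt   # print the 1st and 3rd fields of records whose 3rd field exceeds 50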

What Operations Can You Use the awk Command For?

The awk scripting language is broadly used in text processing. Let’s discuss what text processing operations can be performed using awk:

  • Pattern scanning and processing.
  • Text filtering, transformation, and editing/modifying.
  • Extracting individual fields or whole records.
  • File processing, i.e., single or multiple files of different types.
  • Formatted output, i.e., printing or redirecting the output to another file.
  • Mathematical calculations, with support for conditions and loops.
  • Controlled operations on the data, i.e., BEGIN and END blocks.
  • System monitoring, i.e., combining awk with other Linux commands to filter their results.

That’s all from understanding the basics of the awk command. Now, let’s dig into the examples to learn practically.

awk Command Examples

The awk examples provided here were run on two sample files, “students.txt” and “class.txt”.

Content of the “students.txt”:

Content of the “class.txt”:

Let’s start with the basic use cases.

Print with awk

Print is the primary operation of the awk command. It can be used at any time and in any form to print data.

Example 1: Print All (in a single or multiple file)

Use the print in the “{action}” part to print the content of the file:

awk '{print}' file

Example 2: Print Specific Columns

Write the column number in place of the “n” to show the data of that column only:

awk '{print $n}' file

Example 3: Print a Specific Range of Data from the File

You can specify a range of record (line) numbers to retrieve only that part of the file:

awk 'NR==4, NR==8' file

Example 4: Update any Field’s Value and Print it

You need to specify the field that you want to update and its value. Then, you can print it or store the output in some other file.

Here’s a demonstration:

awk '{$3 = "New Value"; print}' file

This only updates the output on the terminal; the original file content remains the same.

Example 5: Printing From Multiple Files

The awk command can be applied to multiple files. Here’s how it works:

awk '{print $1}' file1 file2

The command will print the first column/field of two different files:

Matching a Specific Expression | awk With Regular Expressions

Expressions are used with the “awk” command to match a specific pattern or a word in the file and perform any operation on it.

Example 6: Matching the Exact Word

The command below matches the expression and performs the operation based on that match. Here’s a simple example:

awk '/Software/ {print}' file

The command checks for the “Software” word and prints all the data records that contain the “Software” word.

Example 7: Match the Start of the Character

The caret (^) symbol is used to match a pattern at the start of a line, after which further operations can be performed. Here’s the practical:

awk '/^110/ {print}' file

Printing the lines that start with “110”:

Example 8: Match the Endpoint Character

Likewise, the end character can also be specified to match and print (or perform any other operation):

awk '/Year$/ {print}' file

The above command matches “Year” at the end of a line and prints all the records that end with the word “Year”.

Bonus: Other Regex Characters

The following are the rest of the Regex operators that can be used with awk to get more refined output:

Regex Characters Description
[ax] Matches only the characters a and x
[a-x] Matches any character in the range a-x
\w Matches a word character (letters, digits, underscore)
\s Matches a whitespace character
\d Matches a digit (in some awk variants; POSIX awk uses [0-9])
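
For example, a character class can be combined with the ^ anchor to print only the records that begin with a digit (a small sketch):

awk '/^[0-9]/ {print}' file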

Formatting the Data With awk | awk Variables ( NR, NF, FS, FNR, OFS, ORS)

The awk variables are the built-in variables to process and manipulate the data. Each variable has its own purpose and refers to a record or a field. Let me first introduce all these variables to you:

NR (Number of Records):

The number of records being processed, i.e., if a file has 10 records (rows), then the NR’s value will range from 1 to 10.

Example 9: Print the Number of Records

Printing the record number of the data in a file:

awk '{print NR, $2}' filename

Example 10: Getting the Disk Usage Report

Based on NR, the administrator can analyze the disk usage report:

df -h | awk 'NR>1 { print $1, $5 }'

The command gets the input from the df -h command and then pipes it with awk to get the filesystem name ($1) and the percentage ($5) used by each filesystem.

Similarly, other resources’ performance and progress can also be checked using the NR in awk.

NF (Number of Fields):

Denotes the number of fields in each record.

Example 11: Getting Number of Fields in a File

Let’s see it through the following:

awk '{print NF}' file

The command prints the “NF” number of fields in each record of the target file:

FS (Field Separator):

This is the character used to separate the fields. It is whitespace by default; for comma-separated (CSV) data, such as spreadsheet exports, you would set it to a comma:

Example 12: Printing the field separator

awk '{print FS, $4}' file

This command prints the field separator:
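
You can also set FS explicitly, either with the -F option or in a BEGIN block, which is the usual way to handle colon- or comma-separated data (a small sketch using /etc/passwd):

awk -F':' '{print $1}' /etc/passwd            # set the field separator on the command line
awk 'BEGIN {FS=":"} {print $1}' /etc/passwd   # or set it inside the program before any input is read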

FNR (File Number of Record):

When multiple files are processed, FNR counts the records per file. NR keeps counting across all the files without resetting, whereas FNR restarts from 1 for each new file.

Example 13: Printing the Per-File Record Number with a Field

awk '{print FNR, $4}' file

The command prints the per-file record number (FNR) and the 4th field:

OFS (Output Field Separator):

This is the Output field separator, i.e. the character separating the output fields. The default output field separator is a space ( “ ” ). However, you can change or set a new field separator with the help of the OFS keyword:

Example 14: Changing the Output Field Separator

Let’s understand it practically:

awk 'BEGIN {OFS=" - "} {print $3, $4}' file

The command will print the 3rd and 4th columns from the specified file, with “ - ” set as the new OFS:

ORS (Output Record Separator):

Like OFS, ORS is a built-in variable: the Output Record Separator. By default it is a newline; however, you can change it, as we are going to show here practically.

Example 15: Changing the Output Record Separator

The following command will change the record separator to “|”:

awk '{ORS =" | "} {print}' filename

The record separator is now set to “ | ” for this specific output only.

Advanced awk Examples

Until now, we have gone through basic and intermediate use cases of awk. Since awk also incorporates programming-language features, here we’ll discuss its functionality with some advanced use cases:

Example 16: Find the Largest Field in a File

The following command uses an if statement to give you the length of the longest line in the file. Here’s an example to do so:

awk '{if (length($0) > max) max = length($0)} END {print max}' file
  • awk is the command, and file is the file on which the operation is performed; the rest of the program sits inside the quotes.
  • The length($0) expression gets the length of the current line and checks whether it is greater than “max”.
  • If the condition is true, length($0) is stored in “max”, and “max” is printed in the END block.

Similarly, if you want to get the minimum length of a line, initialize min on the first record and flip the comparison:

awk 'NR==1 {min = length($0)} length($0) < min {min = length($0)} END {print min}' file

Example 17: Get the Sum of the Field Values

With awk, you can calculate the sum of any field. Here’s the practical demonstration:

awk '{sum += $1} END {print "Total:", sum}' file
  • The sum variable starts out at 0 (awk treats an unset variable as 0 in a numeric context).
  • Each record's first field ($1) is added to the running total, and the total is printed in the END block.
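
A small extension of the same idea (a sketch) computes the average by dividing the total by the record count in the END block:

awk '{sum += $1} END {if (NR > 0) print "Average:", sum/NR}' file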

Example 18: Finding a Max Value in a Column

Here, m is the variable that stores the maximum value:

awk 'BEGIN{m=0} {if($2 > m) m=$2} END {print "Maximum Value:", m}' file

Here’s the breakdown of the command:

  • A variable “m” is initialized to 0; then, for every record, if the value in the 2nd column is greater than m, that value is stored in m.
  • The check runs on every record, so by the END block, m holds the largest value in the 2nd column.

Example 19: Count the Occurrences of Each Word

Here, s is the loop variable and f is an associative array that stores each word's frequency, so you can see, for example, how many records contain the word “Year”:

awk '{for(s=1;s<=NF;s++) f[$s]++} END {for(Year in f) print Year, f[Year]}' file
  • The for loop iterates over all the fields in each record using NF.
  • The f[$s]++ expression stores each word in the associative array f and counts how many times it appears.
  • In the END block, the second for loop prints each unique word and its frequency (as a value) from the array.

Example 20: Monitoring the Log Files

To monitor the Apache access log file:

awk '{print $n}' /var/log/apache2/access.log

Similarly, you can keep track of iptables logs to see which IPs are triggering firewall rules, e.g., by printing the log file if your firewall is configured to log to a location such as /var/log/iptables.log.
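
For a more useful report, awk can be combined with sort and uniq to count which client IPs hit the server most often (assuming the default Apache log format, where the client IP is the first field):

awk '{print $1}' /var/log/apache2/access.log | sort | uniq -c | sort -rn | head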

Example 21: Search and Analyze | AWK with grep

The grep is known for its data filtering and searching, and the awk extracts and manipulates data. Let’s see the practical to check how these utilities work together:

grep "Software" file | awk '{print $3}'

Here’s the breakdown of the above command:

  • grep “Software” filters and selects only the records containing the “Software” keyword.
  • The awk command then prints the 3rd column of those matching records.

Similarly, grep and awk can be applied to other log files or system files to filter and analyze the specific log-related information.

Example 22: Substituting | AWK with sed

The “sed” command edits and manipulates the text. So, the output of the awk command can be piped with the sed to perform specific operations on the output, or individually:

Let’s see the practical, using the following simple command:

awk '{print $3, $4}' file | sed 's/ /,/g'

The awk command prints the 3rd and 4th columns of the file, and sed then globally substitutes a “,” in place of the spaces in that output.

Functions in awk

Since awk is a scripting language, it has a long list of built-in functions used to perform various operations. Some of them already appeared in the examples above, e.g., length(). Here, we will elaborate on a few functions and their use cases.

Substituting in awk with Functions

The awk has “sub” and “gsub” as two substitution functions; let’s understand these through examples:

Note: The following examples use the same sample file to demonstrate the “sub” and “gsub” functions.

Example 23: Substitute the First Occurrence (in each record) Only | awk with sub

The “sub” function substitutes the first occurrence of the matching word/expression in each record. Here is an example command to understand:

awk '{sub("Year", "Y"); print}' file

The first occurrence of the word “Year” will be replaced with the “Y”:

Example 24: Substitute all the instances of a word/Expression | awk with gsub

The “gsub” globally substitutes the matching keyword, i.e., all the occurrences of the matching keyword. Here’s an example:

awk '{ gsub("Year", "Y"); print }' file

Now, all the occurrences will be replaced:

Example 25: Get the UNIX Timestamp | awk with systime()

GNU awk (gawk) provides the systime() function to get the current UNIX timestamp. Let’s practice it:

awk 'BEGIN {print systime()}'

Since the BEGIN block runs once before any input is read, this prints a single value: the current number of seconds since the UNIX epoch. To stamp each processed record with the current time instead, call systime() in the main block:

awk '{print systime(), $0}' file

Other awk Functions:

Because of its scripting support, the awk has a long list of supported functions. The following table lists some of these and also describes their purpose with syntax:

awk Function Description/Purpose Syntax
length() Length of the current line awk '{print length()}' file
substr(a, b, c) Extracts a substring of string a, starting at position b, with length c awk '{print substr($a, b, c)}' file
split(a, b, c) Splits string a into array b using separator c awk '{split($a, arr, ":"); print arr[1]}' file
tolower(a) Converts a to lowercase awk '{print tolower($a)}' file
toupper(a) Converts a to uppercase awk '{print toupper($a)}' file
int(a) Converts a into an integer (truncates) awk '{print int($a)}' file
sqrt(a) Square root of a awk '{print sqrt($a)}' file
srand() Seeds the random number generator awk 'BEGIN { srand(); print rand() }'
rand() Returns a random number between 0 and 1 awk 'BEGIN { print rand() }'

That’s all from this tutorial.

Conclusion

The awk utility is an effective command-line tool that serves as a scripting language, too. From a basic search and find to running it for advanced scripting, the awk is a full package.

This post has a brief overview and explanation of the awk command in Linux, with advanced examples.

FAQs

Q 1: What does awk stand for?

awk is named after the initials of its creators: Alfred Aho, Peter Weinberger, and Brian Kernighan. They designed awk at AT&T Bell Laboratories in 1977.

Q 2: What is awk in Linux?

awk is a powerful command-line utility and a scripting language. It is used to read data, search and scan for patterns, print, format, calculate, analyze, and more. For some of these use cases, awk is combined with grep, sed, and regular expressions.

Q 3: What does awk ‘{print $2}’ mean?

awk '{print $2}' will print the 2nd (second) field of each line of the file on the terminal. If used with multiple files, the second field of each file will be printed.

Q 4: What is the difference between awk and grep?

grep performs searching and filtering, while awk extracts, manipulates, and analyzes data. The two are often used together for search-and-analysis purposes, i.e., grep provides the searching/filtering and awk performs the analysis.

Q 5: What is the difference between awk and bash?

Both awk and bash are scripting languages. Bash is better suited for general-purpose, professional shell scripting, while awk shines for quick text-processing tasks on the terminal; simpler text-manipulation tasks are often done more swiftly in awk than in bash.

Q6: How do I substitute using awk?

The awk supports two functions for substituting, i.e., sub and gsub. The sub is used to substitute the first occurrence of the matching word, whereas the gsub is used for global substitution of the matching word.

 

by: Abhishek Prakash
Wed, 21 May 2025 16:16:09 +0530


Have you ever watched a bearded sysadmin navigate their editor with lightning speed, jumping between multiple panes with the flick of a few keystrokes? That's the magic of Vim's split window feature!

Think of it as having multiple monitors inside a single screen. And you don't even need screen command or tmux tools for this purpose. Vim does it on its own.

One Vim instance with three window splits.
Split Windows in Vim Editor

You can split the screen horizontally as well as vertically. And all this is done with a few keystrokes, of course.

Vim split window keyboard shortcuts

Action Keyboard Shortcut Description
Horizontal split :split or :sp Splits window horizontally
Vertical split :vsplit or :vs Splits window vertically
Close current window :q or :close Closes the active window
Close all except current :only or :on Closes all windows except active one
Navigate between windows Ctrl-w + h/j/k/l Move to left/down/up/right window
Navigate between windows Ctrl-w + Ctrl-w Cycle through all windows
Resize horizontally Ctrl-w + < or > Decrease/increase width
Resize vertically Ctrl-w + - or + Decrease/increase height
Equal size windows Ctrl-w + = Make all windows equal size
Maximize height Ctrl-w + _ Maximize current window height
Maximize width Ctrl-w + | Maximize current window width
Move window Ctrl-w + r Rotate windows
Swap with next window Ctrl-w + x Exchange with next window

You can also use the mouse for resizing and some other features if you have the mouse enabled in Vim.

Creating Split windows

Let's see in detail how those magical keys work and what they look like in the editor.

Horizontal splits

Creating a horizontal split in Vim is like adding a floor to your house - you're stacking views on top of each other.

When you're deep in code and need to reference something above or below your current focus, horizontal splits come to the rescue.

:split filename  (or :sp filename)

Horizontal split in action

Filename is optional here. If you don't specify a filename, Vim will split the window and show the current file in both panes.

💡
Use :set splitbelow to open the new windows below the current one.

Vertical splits

Vertical splits open windows side-by-side. It's good for viewing documentation, or keeping an eye on multiple parts of your project. You can also use it for quickly comparing two files if you do not want to use the dedicated vimdiff.

:vsplit filename  (or :vs filename)

Vertical split in action

💡
By default, the new windows are opened on the left of the current window. Use :set splitright to open the new windows on the right.
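
If you prefer these directions all the time, both options can go into your ~/.vimrc (a minimal sketch; in a vimrc, lines starting with " are comments):

" ~/.vimrc
set splitbelow    " :split opens the new window below the current one
set splitright    " :vsplit opens the new window to the right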

Moving between split windows

Once you've created splits, hopping between them is where the real productivity starts. Think of Ctrl-w as your magic wand - when followed by a direction key, it teleports your cursor to that window.

  • Ctrl-w followed by w will switch to the below/right window.
  • Ctrl-w followed by W (uppercase W or Shift-w key) will switch to the above/left window.

I prefer the direction keys, though. It is easier to navigate this way in my opinion.

Ctrl-w h  # Move to the window on the left
Ctrl-w j  # Move to the window below
Ctrl-w k  # Move to the window above
Ctrl-w l  # Move to the window on the right

Move between splits

You can also use the arrow keys instead of the typical hjkl Vim movement keys.

💡
If remembering directions feels like a mental gymnastics routine, just cycle through all the windows by pressing Ctrl-w Ctrl-w. Press the pair repeatedly and you'll move from one split window to the next.

I'll be honest - when I first started using Vim, I kept forgetting these window navigation commands. So, I thought of Ctrl-w as "window" followed by the direction I wanted to go. After a few days of practice, my fingers remembered even when my brain forgot!

Resizing split windows

Not all windows are created equal. Some need more space than others, based on your need. Vim lets you adjust your viewing space with a few keystrokes.

Ctrl-w +  # Increase height by one line
Ctrl-w -  # Decrease height by one line
Ctrl-w >  # Increase width by one column
Ctrl-w <  # Decrease width by one column

Resize split windows

For faster resizing, prefix these commands with a number:

10 Ctrl-w +  # Increase height by 10 lines

When things get too chaotic, there's always the great equalizer:

Ctrl-w =  # Make all windows equal size

Just so that you know, you can also create splits with specific dimensions by adding a number before the command.

For example, :10split creates a horizontal split with 10 lines of height, while :30vsplit creates a vertical split that's 30 characters wide.

💡
Need maximum space ASAP? Try these power moves:

Ctrl-w _ maximizes the current window's height
Ctrl-w | maximizes the current window's width
Ctrl-w = equalizes all windows when you're ready to share again

I call this the "focus mode toggle" - perfect for when you need to temporarily zoom in on one particular section of a file!

Moving and rearranging Windows

Sometimes you create the perfect splits but realize they're in the wrong order. Rather than closing and recreating them, you can rearrange your windows like furniture:

Ctrl-w r  # Rotate windows downward/rightward
Ctrl-w R  # Rotate windows upward/leftward
Ctrl-w x  # Exchange current window with the next one

Rearrange split windows

You can also completely move a window to a new position:

Ctrl-w H  # Move current window to far left
Ctrl-w J  # Move current window to very bottom
Ctrl-w K  # Move current window to very top
Ctrl-w L  # Move current window to far right

It's like playing Tetris with your editor layout. While I am a fan of the classic Tetris game, I am not a fan of moving and rearranging the windows unless it is really needed.

💡 Few random but useful tips

Let me share a few more tips that will help your workflow when you are dealing with split windows in Vim.

Add terminal in the mix

If you are in a situation where you want to look at your code and run it at the same time, like an IDE, you can add a terminal in your split. No more alt-tabbing between terminal and editor!

  • :sp | terminal opens a horizontal split with a terminal
  • :vs | terminal opens a vertical split with a terminal
Terminal in a split window in Vim

Start split with Vim

Want to start Vim with splits already configured? You can do that from your command line:

# Open two files in horizontal splits
vim -o file1 file2

# Open two files in vertical splits
vim -O file1 file2

# Open three files in horizontal splits
vim -o3 file1 file2 file3

File explorer in split windows

One of my favorite tricks is opening Vim's built-in file explorer (netrw) in a split:

:Sexplore  # Open file explorer in horizontal split
:Vexplore  # Open file explorer in vertical split
Vim file explorer in split window

It's like having a mini file manager right next to your code - perfect for quickly navigating project files without losing your place in the current file.

Close all the split windows and exit Vim

When you are done with your tasks on a project, instead of closing individual split windows, you can close all of them together and exit Vim with :qa

Save your work before, of course.
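
A few related commands worth keeping in mind (standard Vim, nothing specific to splits):

:wa    # write all modified buffers without quitting
:wqa   # write all modified buffers and exit Vim
:qa!   # quit all windows, discarding unsaved changes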

Wrapping up

Splits aren't just a cool feature - they're a strategic tool. You can use it to edit files with reference doc open on the side or to watch log output in the terminal while debugging. It's up to you how you use this amazing feature.

It might seem like keyboard gymnastics at first, but quickly becomes second nature. Like learning to touch type or ride a bike, the initial awkwardness gives way to fluid motions that you'll hardly think about.

Start small - maybe just two vertical splits - and gradually incorporate more commands as you get comfortable. Before long, you'll be that expert terminal dweller others watch in amazement as you effortlessly dance between multiple files without ever touching your mouse. Happy splitting! 🚀

by: Adnan Shabbir
Sun, 18 May 2025 05:36:45 +0000


Linux has evolved over time, from a minimalist interface and tools to supporting state-of-the-art interfaces and applications. In today’s modern era, a browser is one of the most required applications on any system. Linux distros that come with a GUI by default have some browsers pre-installed, e.g., Firefox or Chromium. Beyond the default browser, Linux supports other competitive browsers that can be a better choice than the pre-installed one.

Keeping this in view, I will discuss the top Linux browsers, including the GUI and text-based browsers.

Top 12 Browsers for Linux

From a user’s point of view, there are several factors that influence the browser choice. While choosing a browser, some users prefer a resource-friendly browser, a browser full of features, or a secure browser. If you are unsure, please go through this guide, and you’ll find your browser as per your requirements.

Firefox

Firefox, because of its Free and Open-Source (FOSS) nature, comes by default in most Linux distributions, e.g., Ubuntu and Kali. It was introduced in 2004 as a competitor to Internet Explorer. Firefox offers various features that make it stand out among most browsers.

Let’s see why anyone should use or opt for Firefox.

Why Firefox?

Over time, Firefox evolved, and it attracted a large number of Linux users. Firefox is well-known for its privacy-oriented features, i.e., cookie protection, tracking protection, and support for DNS over HTTPS.

Firefox is updated every 4 weeks, and the core focus is continuously evolving the privacy and security features.

Firefox has a large number of extensions in its “Add-ons” store. Extensions assist the users in doing specific tasks with one click instead of spending a few minutes on a specific task.

Firefox is highly customizable, which makes it favorable for those looking for some visual appeal in the browser.

Limitations of Firefox:

Although Firefox is well-liked and is no doubt a fully loaded browser, it still has some limitations that I want to highlight:

  • Processing can feel slow and laggy compared to Chromium-based browsers, which is a red flag in this speedy tech era and part of why younger users rarely adopt Firefox.
  • It consumes relatively more memory than it should, as its tab process management is less efficient.

Want to give it a try? Let’s learn how it can be installed on various Linux distributions.

Install Firefox on Linux

sudo apt install firefox #Debian Derivatives
sudo pacman -S firefox #Arch and its Derivatives
sudo dnf install firefox #Fedora

Google Chrome

Google Chrome is also one of the leading browsers for Linux systems. It was introduced in 2008, and since then, it has been gaining popularity day by day because of its amazing strengths, which you might not see in any other browser.

So, let’s dig into the “Why” part:

Why Google Chrome?

Google Chrome releases a stable version every 4 weeks (same as Firefox). At the time of writing, Google Chrome 136 is the latest stable release, with security updates in focus as well.

Google Chrome has a large extensions store to integrate various tools, apps with your browser to save time. That’s why Google Chrome’s user experience is better than other browsers in the list.

Google Chrome is a part of Google’s ecosystem, thus, you can integrate Google services with your Chrome profile. This way, multiple accounts can be integrated with multiple Chrome profiles.

Chrome offers some control over your data: protecting your list of passwords, autofill control, managing cookies and sessions to some extent, indicating if a password is found in a data breach, asking before saving any password, etc.

Limitations of Google Chrome:

  • High resource consumption, i.e., RAM.
  • As it is integrated with Google’s ecosystem, Chrome usually tracks and collects data on the user’s behavior throughout the session.

Chrome has some serious limitations, as discussed above, but it is still the most used and loved browser.

Install Google Chrome on Linux

Let me take you through the installation methods of Chrome on Linux:

Ubuntu and other Debian Distros:

Chrome is not directly available on Ubuntu’s or Debian’s repository. You have to download the “deb” package file from the Official Website and use the following command:

sudo apt install "./deb-package-name"

Click here to read the complete Installation guide of Chrome on Ubuntu.

Arch-Linux and Its Derivatives:

Get the AUR helper, i.e., yay in this case:

sudo pacman -S --needed base-devel git
git clone https://aur.archlinux.org/yay-git.git
cd yay-git
makepkg -si

Now, install Chrome on Arch using the following command:

yay -S google-chrome

Detailed insight into installing Google Chrome on Arch Linux.

Fedora:

sudo dnf install fedora-workstation-repositories
sudo dnf install google-chrome-stable

Click here to learn multiple methods of installing Chrome on Fedora.

Opera

Opera is a Chromium-based, partially open-source browser. It was first launched in 1995 with the aim of providing a state-of-the-art user experience for its time. Let’s have a look at the why:

Why Opera?

Opera uses the same rendering and JavaScript engines (Blink and V8) as Google Chrome, and thus provides comparable speed while surfing.

Opera provides built-in support for messengers of social platforms, providing a dedicated bar inside the browser.

Opera has built-in VPN support, though it serves only 3 locations.

Opera supports a number of extensions that assist users in doing several tasks quickly, i.e., a single click to open/manage apps or tools. Apart from Opera’s own extensions, it supports Chrome-based extensions as well.

Limitations of Opera:

  • Resource consumption. Chromium-based, but still utilizes high resources.
  • Opera is only partially open source; the VPN's source code is not published, which raises trust concerns and can ultimately compromise user privacy. The same concern applies to the ad and tracker blocker.

Install Opera on Linux

Snap Supported Distros:

sudo snap install opera

Ubuntu and Other Debian Derivatives:

Click here to see detailed installation methods of Opera on Ubuntu and other Debian derivatives.

Arch and Its Derivatives:

yay -S opera

Brave

In 2016, the co-founder of the Mozilla project introduced Brave to provide privacy. For this, it was launched with a built-in Ad and tracker blocker. Let’s dig into the core details of why this Browser is one of the most used by Linux users:

Why Brave?

Brave was introduced to ensure privacy, so it integrates Tor browsing, one of the most secure ways of browsing. Routing traffic through multiple relays and blocking trackers is what makes Brave a secure browser.

Brave has a unique Ad reward system, known as “Basic Attention Token”. Users can earn these tokens by watching the Ads and supporting the content creators.

Brave is also a Chromium-based browser (equipped with the V8 JS engine), which makes it a fast browser.

Brave supports a list of Chrome-based extensions to ensure the availability of maximum features to the users. Moreover, it has cross-platform support available, i.e., you can integrate the saved bookmarks, browsing history, and other settings to another platform.

Limitations of Brave:

  • The aggressive tracker and ad-blocking feature of Brave blocks various useful extensions and sites either completely or partially, which impacts the user experience.

Now, let’s explore the ways to install Brave on Linux:

Install Brave Browser on Linux

You can get Brave on Linux using one shell script:

curl -fsS https://dl.brave.com/install.sh | sh

Ubuntu and Other Debian Derivatives:

Click here to get detailed instructions for installing Brave on Ubuntu and Debian Derivatives

Arch and Its Derivatives:

Click here to install Brave on Arch and its derived distributions.

Chromium

Chromium is a free and open-source browser developed and maintained by Google (under the Chromium project). It was first released in 2008 and named after the metal chromium, which is used to make chrome plating. Let’s see why Chromium is one of the best browser choices.

Why Chromium?

Chromium is open source, which makes it favorable for Linux users, and it was developed by Google, so most of the Chrome-like features are already there.

Chromium also allows you to get extensions from the Chrome Web Store and from external sources, resulting in a large number of extensions for a better user experience.

The browser’s source code can be viewed and modified by anyone, though only approved contributors can land changes in the upstream project.

When compared with Chrome, Chromium is more privacy-oriented, i.e., updates are installed manually and it does not track or share user data the way Chrome does.

Limitations of Chromium:

Chromium is also a resource-intensive browser, making it hard for people looking for a hardware-friendly browser, and the Chromium codebase is the major reason behind this.

Install Chromium on Linux

Ubuntu and Other Debian Derivatives:

sudo apt install chromium-browser

Read this guide for detailed installation instructions.

Snap supported Distros:

sudo snap install chromium

Arch and Its Derivatives:

sudo pacman -S chromium

Fedora:

sudo dnf install chromium

Vivaldi

Vivaldi is another Chromium-based browser, introduced in 2015 by Vivaldi Technologies, a company founded by Opera's co-founder. It was developed as an alternative to the Opera browser.

Why Vivaldi?

Since Vivaldi is Chromium-based, its UI is customizable, and users have a variety of themes, layout options to experience a unique feel.

Vivaldi’s Quick Commands support allows you to navigate between tabs, create Vivaldi notes, and search your browsing history. Just type a keyword in the command search box, and a list of matching commands is shown with their purpose.

Vivaldi is equipped with a built-in mail client and RSS feed reader, which you may not get by default in other browsers.

Limitations of Vivaldi:

  • Vivaldi is not completely open-source, with some of its features on a closed-source list.
  • Although it offers a customizable UI, that customization can result in high hardware resource consumption.

Install Vivaldi on Linux

Ubuntu and Other Debian Derivatives:

Vivaldi is not directly available on the repositories of Ubuntu or other Debian derivatives. However, you can get the “.deb” package from the official Vivaldi site. Once the “deb” package is downloaded, you can use the following command to install it:

sudo apt install "./path-to-deb-file"

Note: Follow this guide for a detailed installation method.

Snap Supported Distros:

Users of those Linux distributions where the snap is functional can use the following command to install Vivaldi on the system:

sudo snap install vivaldi

Flatpak Supported Distros:

Ensure that your system has Flatpak installed and it is connected to Flathub. Then, use the following command to install it:

flatpak install flathub com.vivaldi.Vivaldi

Tor (The Onion Router)

Tor is the most privacy-focused browser on this list. The Tor project was introduced in 2002 with the aim of enabling anonymous browsing, and it is managed and maintained under the Tor Project.

Why Tor?

When a request is sent through Tor, it passes through multi-layered routing, one relay after another. This multi-layered routing makes it extremely difficult to trace a user or the user's location.

Tor also supports “.onion” links, which only work over onion routing, i.e., through Tor. This makes Tor the browser of choice for specific uses (helping government and corporate users work anonymously toward specific goals).

Limitations of Tor:

  • The multi-layered routing puts extra load on the system resources, which is not good for users looking for resource-friendly browsers.
  • Takes more time to load/start.
  • Anonymity is usually utilized in illegal activities (hacking, dark web, etc).

Install Tor on Linux

Ubuntu and other Debian Derivatives:

sudo apt install torbrowser-launcher

Note: Follow this guide for detailed installation instructions for Tor on Ubuntu.

Flatpak Supported Distros:

flatpak install flathub org.torproject.torbrowser-launcher

Fedora:

To install Tor on Fedora, you have to first integrate the Tor project with Fedora’s package repository and then proceed with the installation. Get brief info on this at Tor’s official page.

Falkon

Falkon was initially introduced in 2010 under the name “QupZilla”. Later, in 2017, KDE adopted it and renamed it from “QupZilla” to “Falkon”.

Why Falkon?

Since it is a KDE-owned browser, it works well with and integrates tightly into the KDE desktop environment on Linux.

Falkon is a resource-friendly browser, making it well-liked among users working in a resource-constrained environment.

It offers a built-in Ad blocker and some privacy controls, which are enough for a normal user and thus nullify the need to install any other service.

Limitations of Falkon:

  • Falkon is straightforward and resource-friendly, but its engine receives limited updates. Because of that, it sometimes misbehaves with modern web standards, i.e., dynamic sites, high-end graphical visuals, and JavaScript-heavy sites.
  • It does not have enough extension support as compared to other browsers.

Install Falkon on Linux

sudo apt install falkon #Debian Derivatives
flatpak install flathub org.kde.falkon #Flatpak Supported Distros
sudo snap install falkon #Snap Supported Distros

Read this guide for detailed installation instructions using Snap.

Midori

Midori was introduced in 2007 as part of the Xfce project, aiming to offer a simple, fast, and lightweight browser for Linux users.

Why Midori?

Midori does not have modern visuals, but it is effective for hardware-conscious users, i.e., old hardware, low-spec machines, and embedded systems.

It has notably low memory consumption, which makes it start up and perform fast.

It supports low-level tools for tracker and cookie blocking, providing essential privacy to users.

Limitations of Midori:

  • Low support for the extensions and advanced customization.
  • Only basic security measures are supported; intermediate and advanced protections are missing.

Install Midori on Linux:

Ubuntu and Other Debian Derivatives:

sudo apt install midori

Snap Supported Distros:

sudo snap install midori

Note: Remember to configure and enable snapd, or else you will get an error while installing.

Flatpak Supported Distros:

flatpak install flathub org.midori_browser.Midori

These were the most used and recommended GUI browsers for Linux users.

Lynx | Text-Based Browser

Lynx is an open-source, command-line browser for Linux systems. It was introduced in 1992 by a group of researchers at the University of Kansas.

Why Lynx?

Lynx was designed for command-line browsing and is still used on Linux servers to keep GUI exposure as low as possible.

Lynx gives websites only limited means to track user data. It also provides control over cookies: users can decide whether cookies are allowed or disallowed.

It is a preferred browser while communicating with a system through SSH, Telnet, or any other terminal-based connections.

Because it is command-line only, Lynx is light on resources and well suited to resource-constrained systems.

Limitations of Lynx:

  • Only command-line operations.
  • Pages are rendered as formatted text on the terminal screen, which might not suit all Linux users, especially those new to Linux.
  • Only recommended when browsing is not being done frequently.

Install Lynx on Linux

sudo apt install lynx    # Ubuntu and Other Debian Distros
sudo dnf install lynx    # Fedora and other dnf-supported Distros
sudo pacman -S lynx      # Arch and its Distros
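
Once installed, you can open a page straight from the terminal, for example (the URL is just a placeholder):

lynx https://example.com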

Browsh | Text-Based Browser

Browsh is another text-based browser for Linux. However, it is more modern than Lynx, as it can render graphical content in the terminal in a controlled manner. For this rendering, the user must have Firefox installed, which Browsh drives behind the scenes.

Why Browsh?

Browsh includes a basic renderer for graphical elements (CSS/JS), offering limited GUI-like output for web pages.

Browsh does not share user data. However, while processing web pages, cookies need to be managed manually.

Being a text-based browser, it is lightweight and supports old hardware or systems with low hardware resources.

Browsh is also useful when working over remote connections to a system, e.g., SSH or Telnet.

Limitations of Browsh:

  • Although it supports a modern rendering approach, advanced graphics are not yet fully supported inside Browsh. As a result, some graphics appear pixelated or do not show up properly.

Install Browsh on Linux

sudo apt install browsh    # Ubuntu and Other Debian Distros
sudo dnf install browsh    # Fedora and other dnf-supported Distros
sudo pacman -S browsh      # Arch and its Distros

W3m | Text-Based Browser

W3m was initially introduced in 1995 as a text-based browser for the Unix-derived operating systems. Since then, it has been adapted at a larger scale by Linux users to browse in a non-GUI environment.

Why W3m?

Like other text-based browsers, it is resource-friendly, starts almost instantly, and loads pages at a decent speed.

It is usually used in Linux servers and for remote browsing through remote connection protocols, i.e., SSH, Telnet.

With time, W3m has been updated and now it provides a more user-friendly text-based interface, i.e., inline images and interactive results.

Limitations of W3m:

  • W3m needs to be configured to show inline images and SSL-encrypted pages. If not configured properly, it will show abnormal results in the terminal.

Install W3m on Linux

sudo apt install w3m #Ubuntu and Other Debian Distros
sudo dnf install w3m #Fedora and Other dnf-supported Distros
sudo pacman -S w3m #Arch and Its Derivatives

That’s all from the list of top Linux browsers.

Comparison of the Browsers | Which one to choose?

Now that you have gone through the top browsers for Linux, let me give you a comparison chart. Here, I have considered notable parameters that a user should weigh before switching to another browser:

| Browser | System Resource Usage | Privacy | Customization | Extension Support | Rendering Engine | Updates | Source Code |
|---------|----------------------|---------|---------------|-------------------|------------------|---------|-------------|
| GUI Browsers | | | | | | | |
| Firefox | Medium | Medium | Medium | High | Gecko | Regular | FOSS |
| Chrome | High | Low | Low | High | Blink | Regular | Proprietary |
| Tor | High | High | Low | Medium | Gecko | Regular | FOSS |
| Opera | Medium | Ad and tracker blocker, VPN | High, i.e., sidebar, themes, workspaces | High | Blink | Regular | Partially Open-Source |
| Brave | Low | Ad and tracker blocker | Medium | Medium | Blink | Regular | FOSS |
| Chromium | Low | Low | Low | Medium | Blink | Regular | FOSS |
| Vivaldi | Low | Tracker blocker | Low | High | Blink | Regular | Partially Open-Source |
| Falkon | Very Low | Low | Low | Medium | QtWebEngine | Not Regular | FOSS |
| Midori | Very Low | Low | Low | Medium | WebKit | Not Regular | FOSS |
| Terminal/Text-Based Browsers | | | | | | | |
| Lynx | Very Low | Low | Low | – | Internal | Not Regular | FOSS |
| w3m | Very Low | Low | Low | – | Internal | Not Regular | FOSS |
| Browsh | Very Low | Low | Low | – | Gecko-based | Not Regular | FOSS |

That’s all. Choose your browser wisely.

Conclusion

The top Linux browsers for 2025 are: Google Chrome, Firefox, Opera, Brave, Chromium, Vivaldi, Tor, Falkon, Midori, Lynx, Browsh, and W3m. Each browser stands out for different reasons: some offer advanced features or stronger security, others lower resource consumption or a text-based interface. You just need to see which browser fulfills your requirements and go for it.

I have provided a list of the most used browsers on Linux and a brief comparison so that you can easily pick a browser as per your requirements.

by: Abhishek Prakash
Fri, 16 May 2025 17:00:52 +0530


In the previous edition, I asked your opinion on the frequency of the newsletter. Out of all the responses I got, 76% of members want it on a weekly basis.

Since we live in a democratic world, I'll go with the majority here. I hope the remaining 24% won't mind seeing the emails once a week ;)

Here are the highlights of this edition :

  • TCP Proxy with socat
  • Out of memory killer explained
  • Nerdlog for better log viewing
  • And regular dose of tips, tutorials and memes

🚀 Elevate Your DevOps Career – Up to 50% OFF!

Linux Foundation Sale

This May, the Linux Foundation is offering 50% off on certifications with THRIVE Annual Subscriptions, 40% off on training courses, and 10% off on THRIVE access.

Top Bundles:

  • LFCS + THRIVE — Master Linux Administration
  • CKA + THRIVE — Become a Kubernetes Pro
  • CKAD + THRIVE — Level up Kubernetes Development
  • CKS + THRIVE — Specialize in Kubernetes Security

Offer ends May 20, 2025!

by: LHB Community
Thu, 15 May 2025 15:44:09 +0530


A TCP proxy is a simple but powerful tool that sits between a client and a server and is responsible for forwarding TCP traffic from one location to another. It can be used to redirect requests or provide access to services located behind a firewall or NAT. socat is a handy utility that lets you establish bidirectional data flow between two endpoints. Let's see how you can use it to set up a TCP proxy.

A lightweight and powerful TCP proxy tool is socat (short for "SOcket CAT"). It establishes a bidirectional data flow between two endpoints. These endpoints can be of many types, such as TCP, UDP, UNIX sockets, files, and even processes.

As a former developer and sysadmin, I can't count the number of times I've used socat, and it's often saved me hours of troubleshooting.🤯

Whether it's testing a service behind the company firewall, redirecting traffic between local development environments, or simply figuring out why one container isn't communicating with another, socat is one of those tools that amazes you once you understand what it can do. So many problems can be solved with a single line of command.

In this tutorial, you will learn how to build a basic TCP proxy using socat. By the end of the tutorial, you'll have a working configuration that listens on a local port and forwards incoming traffic to a remote server or service. This is a fast and efficient way to implement traffic proxying without resorting to more complex tools.

Let's get started!

Prerequisites

This tutorial assumes you have a basic knowledge of TCP/IP networking and that socat is installed on your system:

# Debian/Ubuntu
sudo apt-get install socat
# macOS (Homebrew)
brew install socat

Understanding the basic socat command syntax

Here’s the basic socat syntax:

socat <source> <destination>

These addresses can be in the following format:

  • TCP4-LISTEN:<port> (listen for incoming TCP connections on a port)
  • TCP4:<host>:<port> (connect to a remote host and port)

The point is: all you have to do is tell socat “where to receive the data from” and “where to send the data to,” and it will automatically do the forwarding in both directions.
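
For example, here is a minimal sketch of a proxy that listens on local port 8080 and forwards everything to example.com on port 80 (both the host and the ports are placeholders):

socat TCP4-LISTEN:8080,fork,reuseaddr TCP4:example.com:80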

Setting up a basic TCP proxy

Let’s say you have a TCP server working on localhost (loopback interface). Maybe some restrictions prevent you from modifying the application to launch it on a different interface. Now, there’s a scenario where you need to access the service from another machine in the LAN network. Socat comes to the rescue.

Example 1: Tunneling Android ADB

First, we establish a connection with the Android device via ADB, and then we restart the adb daemon in TCP/IP mode.

adb devices
adb tcpip 5555

On some devices, running the adb tcpip 5555 command will expose the service on the LAN interface, but in my setup, it doesn't. So, I decided to use socat.

socat tcp4-listen:5555,fork,reuseaddr,bind=192.168.1.33 tcp4:localhost:5555

A quick reminder: your LAN IP will be different, so adjust the bind value accordingly. The fork option makes socat handle each incoming connection in a separate child process, and reuseaddr lets it rebind to the port immediately after a restart. You can check all your IPs via ifconfig or ip addr.

using adb for proxy

Example 2: Python server

We’ll use Python to start a TCP server on the loopback interface just for demonstration purposes. In fact, it will start an HTTP server and serve the contents of the current directory, but under the hood, HTTP is a TCP connection.

🚧
Start this command from a non-sensitive directory.
python -m http.server --bind 127.0.0.1

This starts an HTTP server on port 8000 by default. Now, let’s verify by opening localhost:8000 in the browser or using a curl request.

curl http://localhost:8000
localhost connection

What if we curl the same port, but this time using the IP assigned on the LAN? It doesn't work, right?

localhost connection failed
socat tcp4-listen:8005,fork,reuseaddr,bind=192.168.1.33 tcp4:localhost:8000

Now, establish the connection on port 8005.

localhost connection success

When establishing a connection through the different devices to http://192.168.1.33:8005, you might get a connection refused error because of firewall rules. You can add a firewall rule to access the service in that case.

You can refer to our tutorial on using UFW to manage firewall for more details. Here are the commands to do the job quickly:

sudo ufw allow 8005/tcp
sudo ufw status
ufw firewall add

Conclusion

Whether you are proxying between containers or opening services on different ports, socat proves to be a versatile and reliable tool. If you need a quick and easy proxy setup, give it a try — you'll be amazed at how well it integrates with your workflow.

CTA Image

Bhuwan Mishra is a Fullstack developer, with Python and Go as his tools of choice. He takes pride in building and securing web applications, APIs, and CI/CD pipelines, as well as tuning servers for optimal performance. He also has a passion for working with Kubernetes.

by: LHB Community
Mon, 12 May 2025 10:43:21 +0530


Automating tasks is great, but what's even better is knowing when they're done or if they've gotten derailed.

Slack is a popular messaging tool used by many techies. And it supports bots that you can configure to get automatic alerts about things you care about.

Web server is down? Get an alert. Shell script completes running? Get an alert.

Yes, that could be done too. By adding Slack notifications to your shell scripts, you can share script outcomes with your team effortlessly and respond quickly to issues and stay in the loop without manual checks. It lets you monitor automated tasks without constantly checking logs. 

🚧
I am assuming you already use Slack and you have a fair idea about what a Slack Bot is. Of course, you should have at least basic knowledge of Bash scripting.

The Secret Sauce: curl and Webhooks

The magic behind delivering Slack notifications from shell scripts is Slack's Incoming Webhooks and the curl command line tool.

Basically, everything is already there for you to use, it just needs some setup for connections. I found it pretty easy, and I'm sure you will too.

Here is what the webhooks and the command-line tool are for:

  • Incoming Webhooks: Slack allows you to create unique Webhook URLs for your workspace that serve as endpoints for sending HTTP POST requests containing messages.  
  • curl: This powerful command-line tool is great for making HTTP requests. We'll use it to send message-containing JSON payloads to Slack webhook URLs.

Enabling webhooks on Slack side

  1. Create a Slack account (if you don't have it already) and (optionally) create a Slack workspace for testing.
  2. Go to api.slack.com/apps and create a new app.
create a new slack app
  3. Open the application and, under the “Features” section, click on “Incoming Webhooks” and “Activate Incoming Webhooks”.
slack webhook activate
  4. Under the same section, scroll to the bottom. You’ll find a button “Add New Webhook to Workspace”. Click on it and add the channel.
webhook connection to workspace
  5. Test the sample curl request.

Important: The curl command you see above also contains the webhook URL. Notice that https://hooks.slack.com/services/xxxxxxxxxxxxx part? Note it down.

Sending Slack notifications from shell scripts

Set the SLACK_WEBHOOK_URL environment variable in your .bashrc file as shown below.

webhook url
Use the webhook URL you got from Slack in the previous step
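
For reference, the export line looks like this (the URL below is only a placeholder; use your own webhook URL):

export SLACK_WEBHOOK_URL="https://hooks.slack.com/services/XXXX/YYYY/ZZZZ"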

Create a new file, notify_slack.sh, under your preferred directory location.

# Usage: notify_slack "text message"
# Requires: SLACK_WEBHOOK_URL environment variable to be set
notify_slack() {
    local text="$1"
    curl -s -X POST -H 'Content-type: application/json' \
        --data "{\"text\": \"$text\"}" \
        "$SLACK_WEBHOOK_URL"
}

Now, you can simply source this bash script wherever you need to notify Slack. I created a simple script to check disk usage and CPU load.

source ~/Documents/notify_slack.sh 
disk_usage=$(df -h / | awk 'NR==2 {print $5}')
# Get CPU load average
cpu_load=$(uptime | awk -F'load average:' '{ print $2 }' | cut -d',' -f1 | xargs)
hostname=$(hostname)
message="*System Status Report - $hostname*\n* Disk Usage (/): $disk_usage\n* CPU Load (1 min): $cpu_load"
# Send the notification
notify_slack "$message"

Running this script will post a new message on the Slack channel associated with the webhook.

Slack notifications from bash shell

Best Practices 

It is crucial to think about security and limitations when you are integrating things, no matter how insignificant you think it is. So, to avoid common pitfalls, I recommend you to follow these two tips:

  • Avoid hardcoding the webhook URL directly in publicly shared scripts. Consider using environment variables or configuration files.
  • Be aware of Slack's rate limitation for incoming webhooks, especially if your scripts may trigger notifications frequently. You may want to send notifications only in certain circumstances (for example, only on failure or only for critical scripts).

Conclusion

What I shared here was just a simple example. You can bring cron into the mix and periodically send notifications about server stats to Slack. You could also add some logic to get notified only when disk usage crosses a certain threshold.
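
For instance, a hypothetical crontab entry (the script path is an assumption) that posts the status report every 30 minutes could look like the line below. Keep in mind that cron does not read your .bashrc, so the SLACK_WEBHOOK_URL variable must be set inside the script or in the crontab itself:

*/30 * * * * /home/user/scripts/system_status.sh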

There can be many more use cases and it is really up to you how you go about using it. With the power of Incoming Webhooks and curl, you can easily deliver valuable information directly to your team's communication center. Happy scripting!

CTA Image

Bhuwan Mishra is a Fullstack developer, with Python and Go as his tools of choice. He takes pride in building and securing web applications, APIs, and CI/CD pipelines, as well as tuning servers for optimal performance. He also has a passion for working with Kubernetes.

by: Abhishek Prakash
Fri, 09 May 2025 20:17:53 +0530


In the past few months, some readers have requested to increase the frequency of the newsletter to weekly, instead of bi-monthly.

What do you think? Are you happy with the current frequency, or do you want these emails each week?

Also, what would you like to see more? Linux tips, devops tutorials or lesser known tools?

Your feedback will shape this newsletter. Just hit the reply button. I read and reply to each of them.

Here are the highlights of this edition :

  • TaskCrafter: A YAML-based task scheduler
  • Docker logging guide
  • cdd command (no, that's not a typo)
  • This edition of LHB Linux Digest is supported by ANY.RUN.

🎫 Free Webinar | How SOC Teams Save Time with ANY.RUN: Action Plan

Trusted by 15,000+ organizations, ANY.RUN knows how to solve SOC challenges. Join team leads, managers, and security pros to learn expert methods on how to:  

  • Increase detection of complex attacks  
  • Speed up alert & incident response  
  • Improve training & team coordination  

Book your seat for the webinar here.

How SOC Teams Save Time and Effort with ANY.RUN: Action Plan
Discover expert solutions for SOC challenges, with hands-on lessons to improve detection, triage, and threat visibility with ANY.RUN.
by: LHB Community
Tue, 06 May 2025 18:08:50 +0530


Anyone who works in a terminal, Linux or Windows, all the time knows that one of the most frequently used Linux commands is "cd" (change directory).

Many people have come up with tools to change the current directory intuitively. Some use the CDPATH environment variable while others go with zoxide, but neither suits my needs.

So I created a tool that works for me as a better alternative to the cd command.

Here's the story.

Why did I build a cd command alternative?

In my daily work, I use the cd command a few dozen times a day (that's the order of magnitude). I've always found it annoying to have to retype the same paths over and over again, or to search for them in the history.

By analyzing my use of “cd” and my command history, I realized that I was most often moving through fifty or so directories, and that they were almost always the same.

Below is the command I used, which displays the number of times a specific directory is the target of a “cd” command:

history | grep -E '^[ ]*[0-9]+[ ]+cd ' | awk '{print $3}' | sort | uniq -c | sort -nr

Here's how it works step by step:

  1. history: Lists your command history with line numbers
  2. grep -E '^[ ]*[0-9]+[ ]+cd ': Filters only lines that contain the cd command (with its history number)
  3. awk '{print $3}': Extracts just the directory path (the 3rd field) from each line
  4. sort: Alphabetically sorts all the directory paths
  5. uniq -c: Counts how many times each unique directory appears
  6. sort -nr: Sorts the results numerically in reverse order (highest count first)

The end result is a frequency list showing which directories you've changed to most often, giving you insights into your most commonly accessed directories.

The above command won't work if you have timestamps enabled in your command history.

From this observation, I thought: why not use mnemonic shortcuts to access the most used directories?

So that's what I did, first for the Windows terminal, years ago, quickly followed by a port to Linux.

Meet cdd

Today cdd is the command I use the most in a console. Simple and very efficient.

GitHub - gsinger/cdd: Yet another tool to change current directory efficiently
Yet another tool to change current directory efficiently - gsinger/cdd

With cdd, you can:

  • Jump to a saved directory by simply typing its shortcut.
  • Bind any directory to a shortcut for later use.
  • View all your pre-defined shortcuts along with their directory paths.
  • Delete any shortcut that you no longer need.
0:00
/1:01

Installing cdd

The source is available here.

The cdd_run file can be copied anywhere in your system. Don't forget to make it executable (chmod +x ./cdd_run)

Because the script changes the current directory, it cannot be launched in a separate bash process from your current session. It must be launched with the source command. Just add this alias to your ~/.bashrc file:

alias cdd='source ~/cdd_run'

Last step: Restart your terminal (or run source ~/.bashrc).

Running cdd without argument displays the usage of the tool.

In the end...

I wanted a short name that was not too far from "cd". My muscle memory is so used to "cd" that adding just a 'd' was the most efficient in terms of speed.

I understand that cdd may not be a tool for every Linux user. It's a tool for me, created for my needs, and I think there might be a few people out there who would like it as much as I do.

So, are you going to be one of them? Please let me know in the comments.

This article has been contributed by Guillaume Singer, developer of the cdd command.

by: Pranav Krishna
Tue, 29 Apr 2025 09:53:03 +0530


In this series on managing the tmux utility, we look at the first level of division: panes.

Panes divide the terminal window horizontally or vertically. Various combinations of these splits can result in different layouts, according to your liking.

Tmux window splitting into panes
Pane split of a tmux window

This is how panes work in tmux.

Creating Panes

Take into focus any given pane. It could be a fresh window as well.

The current window can be split horizontally (up and down) with the key combination

[Ctrl+B] + "
horizontal split
Horizontal Split

And to split the pane vertically, use the combination

[Ctrl+B] + %
vertical split
Vertical Split

Resizing your panes

Tmux uses 'cells' to quantify the amount of resizing done at once. Here is what resizing by 'one cell' looks like: one more character can be accommodated on the side.

resize by one cell
Resizing by 'one cell'

The combination part is a bit tricky for resizing. Stick with me.

Resize by one cell

Use the prefix Ctrl+B followed by Ctrl+arrow keys to resize in the required direction.

[Ctrl+B] Ctrl+arrow

This combination takes a fair number of keypresses, but can be precise.

0:00
/0:08

Resize by five cells (quicker)

Instead of holding the Ctrl key, you could use the Alt key to resize faster. This moves the pane by five cells.

[Ctrl+B] Alt+arrow
0:00
/0:12

Resize by a specific number of cells (advanced)

Just like before, the command line options can resize the pane to any number of cells.

Enter the command line mode with

[Ctrl+B] + :

Then type

resize-pane -{U/D/L/R} xx
  • U/D/L/R represents the direction of resizing
  • xx is the number of cells to be resized

To resize a pane left by 20 cells, this is the command:

resize-pane -L 20
0:00
/0:06

Resizing left by 20 cells

Similarly, to resize a pane upwards, the -U tag is used instead.

0:00
/0:05

Resizing upwards by 15 cells

The resize-pane command is mainly useful for scripting a tmux layout whenever a new session is spawned, as sketched below.
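
As a rough sketch (the session name and resize value here are arbitrary assumptions), such a scripted layout could be just a few shell commands:

tmux new-session -d -s work      # start a detached session named "work"
tmux split-window -h -t work     # split the window vertically (side by side)
tmux resize-pane -t work -L 20   # shrink the active pane by 20 cells to the left
tmux attach -t work              # attach to the prepared layout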

Conclusion

Since pane sizes are always bound to change, knowing all the ways to resize them can come in handy, so all the available methods are covered here.

Pro tip 🚀 - If you make use of a mouse with tmux, your cursor is capable of resizing the panes.

0:00
/0:15

Turning on mouse mode and resizing the panes

Go ahead and tell me which method you use in the comments.

by: Abhishek Prakash
Fri, 25 Apr 2025 21:30:04 +0530


Choosing the right tools is important for an efficient workflow. A seasoned Fullstack dev shares his favorites.

7 Utilities to Boost Development Workflow Productivity
Here are a few tools that I have discovered and use to improve my development process.

Here are the highlights of this edition :

  • The magical CDPATH
  • Using host networking with docker compose
  • Docker interview questions
  • And more tools, tips and memes for you
  • This edition of LHB Linux Digest newsletter is supported by PikaPods.

❇️ Self-hosting without hassle

PikaPods allows you to quickly deploy your favorite open source software. All future updates are handled automatically by PikaPods while you enjoy using the software. I use it to self host Umami analytics.

Oh! You get $5 free credit, so try it out and see if you could rely on PikaPods.

PikaPods - Instant Open Source App Hosting
Run the finest Open Source web apps from $1.20/month, fully managed, no tracking, no ads, full privacy. Self-hosting was never this convenient.
by: Abhishek Prakash
Fri, 25 Apr 2025 20:55:16 +0530


If you manage servers on a regular basis, you'll often find yourself entering some directories more often than others.

For example, I self-host Ghost CMS to run this website. The Ghost install is located at /var/www/ghost/ . I have to cd to this directory and then use its subdirectories to manage the Ghost install. If I have to enter its log directory directly, I have to type /var/www/ghost/content/log.

Typing out ridiculously long paths that take several seconds even with tab completion.

Relatable? But what if I told you there's a magical shortcut that can make those lengthy directory paths vanish like free merchandise at a tech conference?

Enter CDPATH, the unsung hero of Linux navigation that I'm genuinely surprised that many new Linux users are not even aware of!

What is CDPATH?

CDPATH is an environment variable that works a lot like the more familiar PATH variable (which helps your shell find executable programs). But instead of finding programs, CDPATH helps the cd command find directories.

Normally, when you use cd some-dir, the shell looks for some-dir only in the current working directory.

With CDPATH, you tell the shell to also look in other directories you define. If it finds the target directory there, it cds into it — no need to type full paths.

How does CDPATH work?

Imagine this directory structure:

/home/abhishek/
├── Work/
│   └── Projects/
│       └── WebApp/
├── Notes/
└── Scripts/

Let's say, I often visit the WebApp directory and for that I'll have to type the absolute path if I am at a strange location:

cd /home/abhishek/Work/Projects/WebApp

Or, since I am a bit smart, I'll use ~ shortcut for home directory.

cd ~/Work/Projects/WebApp

But if I add this location to the CDPATH variable:

export CDPATH=$HOME/Work/Projects

I could enter WebApp directory from anywhere in the filesystem just by typing this:

cd WebApp

Awesome! Isn't it?

🚧
You should always add . (current directory) in the CDPATH and your CDPATH should start with it. This way, it will look for the directory in the current directory first and then in the directories you have specified in the CDPATH variable.

How to set CDPATH variable?

Setting up CDPATH is delightfully straightforward. If you ever added anything to the PATH variable, it's pretty much the same.

First, think about the frequently used directories where you would want to cd to search for when no specific paths have been provided.

Let's say, I want to add /home/abhishek/work and /home/abhishek/projects in CDPATH. I would use:

export CDPATH=.:/home/abhishek/work:/home/abhishek/projects

This creates a search path that includes:

  1. The current directory (.)
  2. My work directory
  3. My projects directory

This means that if I type cd some_dir, it will first check whether some_dir exists in the current directory. If not found, it searches the directories listed in CDPATH, in the order they appear.

🚧
The order of the directories in CDPATH matters.

Let's say that both work and projects directories have a directory named docs which is not in the current directory.

If I use cd docs, it will take me to /home/abhishek/work/docs. Why? because work directory comes first in the CDPATH.

💡
If things look fine in your testing, you should make it permanent by adding the "export CDPATH" command you used earlier to your shell profile.

Whatever you exported in CDPATH will only be valid for the current session. To make the changes permanent, you should add it to your shell profile.

I am assuming that you are using the bash shell. In that case, it should be ~/.profile or ~/.bash_profile.

Open this file with a text editor like Nano and add the CDPATH export command to the end.
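
For example, assuming the same directories as above, you could append the export line and reload your profile like this:

echo 'export CDPATH=.:$HOME/work:$HOME/projects' >> ~/.bash_profile
source ~/.bash_profile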

📋
When you use cd command with absolute path or relative path, it won't refer to the CDPATH. CDPATH is more like, hey, instead of just looking into my current sub-directories, search it in specified directories, too. When you specify the full path (absolute or relative) already with cd, there is no need to search. cd knows where you want to go.

How to find the CDPATH value?

CDPATH is an environment variable. How do you print the value of an environment variable? Simplest way is to use the echo command:

echo $CDPATH
📋
If you have tab completion set with cd command already, it will also work for the directories listed in CDPATH.

When not to use CDPATH?

Like all powerful tools, CDPATH comes with some caveats:

  1. Duplicate names: If you have identically named directories across your filesystem, you might not always land where you expect.
  2. Scripts: Be cautious about using CDPATH in scripts, as it might cause unexpected behavior. Scripts generally should use absolute paths for clarity.
  3. Demo and teaching: When working with others who aren't familiar with your CDPATH setup, your lightning-fast navigation might look like actual wizardry (which is kind of cool to be honest) but it could confuse your students.
💡
Including .. (parent directory) in your CDPATH creates a super-neat effect: you can navigate to 'sibling directories' without typing ../. If you're in /usr/bin and want to go to /usr/lib, just type cd lib.
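
A quick sketch of that setup (the directories are just examples):

export CDPATH=.:..:$HOME/work:$HOME/projects
cd lib    # from /usr/bin, this jumps to /usr/lib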

Why aren’t more sysadmins using CDPATH in 2025?

CDPATH used to be a popular tool back in the 90s, I think. Ask any sysadmin older than 50, and CDPATH would have been in their arsenal of CLI tools.

But these days, many Linux users have not even heard of the CDPATH concept. Surprising, I know.

Ever since I discovered CDPATH, I have been using it extensively, especially on the Ghost and Discourse servers I run. It saves me a few keystrokes, and I am proud of those savings.

By the way, if you don't mind including 'non-standard' tools in your workflow, you may also explore autojump instead of CDPATH.

GitHub - wting/autojump: A cd command that learns - easily navigate directories from the command line
A cd command that learns - easily navigate directories from the command line - wting/autojump

🗨️ Your turn. Were you already familiar with CDPATH? If yes, how do you use it? If not, is this something you are going to use in your workflow?

by: Ankush Das
Fri, 25 Apr 2025 10:58:48 +0530


As an engineer who has been tossing around Kubernetes in a production environment for a long time, I've witnessed the evolution from manual kubectl deployment to CI/CD script automation, to today's GitOps. In retrospect, GitOps is really a leap forward in the history of K8s Ops.

Nowadays, the two hottest players in GitOps tools are Argo CD and Flux CD, both of which I've used in real projects. So I'm going to talk to you from the perspective of a Kubernetes engineer who has stepped in the pits: which one is better for you?

Why GitOps?

The essence of GitOps is simple: 

“Manage your Kubernetes cluster with Git, and make Git the sole source of truth.”

This means: 

  • All deployment configurations are written in Git repositories
  • Tools automatically detect changes and deploy updates
  • Git revert if something goes wrong, and everything is back to normal
  • More reliable for auditing and security.

I used to maintain a game service, and in the early days, I used scripts + CI/CD tools to do deployment. Late one night, something went wrong, and a manual error pushed an incorrect configuration into the cluster, and the whole service hung. Since I started using GitOps, I haven't had any more of these “man-made disasters”.

Now, let me start comparing Argo CD vs Flux CD.

Installation & Setup

Argo CD can be installed with a single YAML, and the UI and API are deployed together out of the box.

Here are the commands that make it happen:

kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
kubectl port-forward svc/argocd-server -n argocd 8080:443
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d && echo
argo cd installation

Flux CD follows a modular architecture: you need to install the Source Controller, Kustomize Controller, etc., separately. You can also simplify the process with flux install.

curl -s https://fluxcd.io/install.sh | sudo bash
flux --version
flux install --components="source-controller,kustomize-controller"
kubectl get pods -n flux-system
flux cd installation

For me, the winner here is Argo CD (because it gives you more out of the box with a single-install setup).

Visual Interface (UI) 

argo cd ui
Argo CD UI

Argo CD has a powerful built-in Web UI to visually display the application structure, compare differences, synchronize operations, etc.

Unfortunately, Flux CD has no UI by default; it relies primarily on the command line. It can be paired with Weave GitOps or Grafana to check the status.

Again, winner for me: Argo CD, because of a web UI.

Synchronization and Deployment Strategies 

Argo CD supports manual synchronization, automatic synchronization, and forced synchronization, suitable for fine-grained control.

Flux CD uses a fully automated synchronization strategy that polls Git periodically and automatically aligns the cluster state.

Flux CD gets the edge here and is the winner for me.

Toolchain and Integration Capabilities 

Argo CD supports Helm, Kustomize, Jsonnet, etc. and can be extended with plugins.

Flux CD supports Helm, Kustomize, OCI artifacts, SOPS-encrypted configuration, GitHub Actions, etc.; the ecosystem is very rich.

Flux CD is the winner here for its wide range of integration support.

Multi-tenancy and Privilege Management 

Argo CD has built-in RBAC, supports SSO via OIDC and LDAP, and offers fine-grained privilege assignment.

Flux CD uses Kubernetes' own RBAC system, which is more native but slightly more complex to configure.

If you want ease of use, the winner is Argo CD.

Multi-Cluster Management Capabilities 

Argo CD supports multi-clustering natively, allowing you to switch and manage applications across multiple clusters directly in the UI.

Flux CD also supports it, but you need to manually configure bootstrap and GitRepo for multiple clusters via GitOps. 

Winner: Argo CD 

Security and Keys 

Argo CD is usually combined with Sealed Secrets, Vault, or through plugins to realize SOPS. 

Flux CD natively integrates with SOPS: configure it once, and secrets are decrypted automatically with very little effort.

Personally, I prefer to use Flux + SOPS in security-oriented scenarios, and the whole key management process is more elegant.

Performance and Scalability 

Flux CD controller architecture naturally supports horizontal scaling with stable performance for large-scale environments.

Argo CD features a centralized architecture, feature-rich but slightly higher resource consumption.

Winner: Flux CD 

Observability and Problem Troubleshooting 

Real-time status, change history, diff comparison, synchronized logs, etc. are available within the Argo CD UI.

Flux CD relies more on logs and Kubernetes Events and requires additional tools to assist with visualization.

Winner: Argo CD 

Learning Curve 

Argo CD UI is intuitive and easy to install, suitable for GitOps newcomers to get started.

Flux CD focuses more on CLI operations and GitOps concepts, and has a slightly higher learning curve.

Argo CD is easy to get started.

GitOps Principles 

Flux CD follows GitOps principles 100%: everything is declarative configuration, and the cluster automatically aligns itself with Git.

Argo CD supports manual operations and UI synchronization, leaning towards "Controlled GitOps".

While Argo CD has a lot of goodies, if you are a stickler for principles, then Flux CD will be more appealing to you.

Final Thoughts

Argo CD can be summed up as: quick to get started, and it comes with a web interface.

Seriously, the first time I used Argo CD, I had a feeling of “relief”.

After deployment, you can open the web UI and see the status of each application, deploy with one click, roll back, and compare Git and cluster differences. For people like me who are used to kubectl get, it's a welcome relief from information overload.

Its “App of Apps” model is also great for organizing large configurations. For example, I use Argo to manage different configuration repos in multiple environments (dev/stage/prod), which is very intuitive.

On the downside, it's a bit “heavy”. It has its own API server, UI, and controller, which take up a fair bit of resources.

You have to learn its Application CRD if you want to adjust the configuration. Argo CD also provides a CLI for application management and cluster automation.

Here are the commands that can come in handy for the purpose stated above:

argocd app sync rental-app
argocd app rollback rental-app 2

Flux CD can be summed up as a modular tool.

Flux is the engineer's tool: the ultimate in flexibility, configurable in plain text, and capable of being combined into anything you want. It emphasizes declarative configuration and automated synchronization.

Flux CD offers these features:

  • Triggers on Git change
  • auto-apply
  • auto-push notifications to Slack
  • image updates automatically trigger deployment.

Although this can be done in Argo, Flux's modular controllers (e.g. SourceController, KustomizeController) allow us to have fine-grained control over every aspect and build the entire platform like Lego.
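
As a rough sketch of what that declarative, CLI-driven setup looks like (the repository URL, names, and path below are placeholders, not from a real project):

flux create source git my-app --url=https://github.com/example/my-app --branch=main --interval=1m
flux create kustomization my-app --source=GitRepository/my-app --path=./deploy --prune=true --interval=5m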

Of course, the shortcomings are obvious: 

  • No UI
  • The configuration is all based on YAML
  • Documentation is a little thinner than Argo's; you need to rely more on the official examples.

Practical advice: how to choose in different scenarios?

Scenario 1: Small team, first time with GitOps? Choose Argo CD. 

  • The visualization interface is friendly.
  • Supports manual deployment/rollback. 
  • Low learning cost, easy for the team to accept.

Scenario 2: Strong security compliance needs? Choose Flux CD. 

  • Fully declarative.
  • Scales seamlessly across hundreds of clusters.
  • It can be integrated with GitHub Actions, SOPS, Flagger, etc. to create a powerful CI/CD system.

Scenario 3: You're already using Argo Workflows or Rollouts 

Then, continue to use Argo CD for a better unified ecosystem experience.

The last bit of personal advice 

Don't get hung up on which one to pick; choose one and start using it, that's the most important thing!

I also had a bit of “tool-phobia” at the beginning, but after using them, I realized that GitOps itself is the revolutionary concept, and the tools are just the vehicle. You can start with Argo CD and then move on to Flux later.

If you're about to design a GitOps process, start with the tool stack you're most familiar with and the capabilities of your team, and then evolve gradually.

by: Abhishek Kumar
Thu, 24 Apr 2025 11:57:47 +0530


When deploying containerized services such as Pi-hole with Docker, selecting the appropriate networking mode is essential for correct functionality, especially when the service is intended to operate at the network level.

The host networking mode allows a container to share the host machine’s network stack directly, enabling seamless access to low-level protocols and ports.

This is particularly critical for applications that require broadcast traffic handling, such as DNS and DHCP services.

This article explores the practical use of host networking mode in Docker, explains why bridge mode is inadequate for certain network-wide configurations, and provides a Docker Compose example to illustrate correct usage.

What does “Host Network” actually mean?

By default, Docker containers run in an isolated virtual network known as the bridge network. Each container receives an internal IP address (typically in the 172.17.0.0/16 range) and communicates through Network Address Translation (NAT).

docker network list with bridge network highlighted

This setup is well-suited for application isolation, but it limits the container’s visibility to the outside LAN.

For instance, services running inside such containers are not directly reachable from other devices on the local network unless specific ports are explicitly mapped.

In contrast, using host network mode grants the container direct access to the host machine’s network stack.

Rather than using a virtual subnet, the container behaves as if it were running natively on the host's IP address (e.g., 192.168.x.x or 10.1.x.x), as assigned by your router.

It can open ports without needing Docker's ports directive, and it responds to network traffic as though it were a system-level process.

Learn Docker: Complete Beginner’s Course
Learn Docker, an important skill to have for any DevOps and modern sysadmin. Learn all the essentials of Docker in this series.

Setting up host network mode using docker compose

While this setup can also be achieved using the docker run command with the --network host flag, I prefer using Docker Compose.

It keeps things declarative and repeatable, especially when you need to manage environment variables, mount volumes, or configure multiple containers together.
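
For reference, the docker run equivalent of the Compose file below would look roughly like this:

docker run -d --name nginx-host --network host nginx:latest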

Let’s walk through an example config, that runs an nginx container using host network mode:

version: "3"
services:
  web:
    container_name: nginx-host
    image: nginx:latest
    network_mode: host
docker compose file for nginx container

This configuration tells Docker to run the nginx-host container using the host's network stack.

No need to specify ports: if Nginx is listening on port 80, it's directly accessible at your host's IP address on port 80, without any NAT or port mapping.

Start it up with:

docker compose up -d

Then access it via:

http://192.168.x.x

You’ll get Nginx’s default welcome page directly from your host IP.

nginx welcome page on local network

How is this different from Bridge networking?

By default, Docker containers use the bridge network, where each container is assigned an internal IP (commonly in the 172.17.0.0/16 range).

Here’s how you would configure that:

version: "3"
services:
  web:
    image: nginx:latest
    ports:
      - "8080:80"
docker compose file nginx container testing bridge network

This exposes the container’s port 80 to your host’s port 8080.

nginx welcome page on port 8080

The traffic is routed through Docker’s internal bridge interface, with NAT handling the translation. It’s great for isolation and works well for most applications.

Optional: Defining custom bridge network with external reference

In Docker Compose, a user-defined bridge network offers better flexibility and control than the host network, especially when dealing with multiple services.

This allows you to define custom aliasing, service discovery, and isolation between services, while still enabling them to communicate over a single network.

I personally use this with Nginx Proxy Manager that needs to communicate with multiple services.

docker network list highlighting npm network

These are the services that are all connected to my external npm network:

containers list connected to npm network inside my homelab

Let's walk through how you can create and use a custom bridge network in your homelab setup. First, you'll need to create the network using the following command:

docker network create my_custom_network
creating external docker network

Then, you can proceed with the Docker Compose configuration:

version: "3"
services:
  web:
    image: nginx:latest
    networks:
      - hostnet

networks:
  hostnet:
    external: true
    name: my_custom_network
compose file for nginx container using external network

Explanation:

  • hostnet: This is the name you give to your network inside the Compose file.
  • external: true: This tells Docker Compose to use an existing network, in this case, the my_custom_network we just created (referenced via the name field). Docker will not try to create it, assuming it's already available.

By using an external bridge network like this, you can ensure that your services can communicate within a shared network context, but they still benefit from Docker’s built-in networking features, such as automatic service name resolution and DNS, without the potential limitations of the host network.

But... What’s the catch?

Everything has a trade-off, and host networking is no exception. Here’s where things get real:

❌ Security takes a hit

You lose the isolation that containers are famous for. A process inside your container could potentially see or interfere with host-level services.

❌ Port conflicts are a thing

Because your container is now sharing the same network stack as your host, you can’t run multiple containers using the same ports without stepping on each other. With the bridge network, Docker handles this neatly using port mappings. With host networking, it’s all manual.

❌ Not cross-platform friendly

Host networking works only on Linux hosts. If you're on macOS or Windows, it simply doesn’t behave the same way, thanks to how Docker Desktop creates virtual machines under the hood. This could cause consistency issues if your team is split across platforms.

❌ You can’t use some docker features

Things like service discovery (via Docker's DNS) or custom internal networks just won’t work with host mode. You’re bypassing Docker's clever internal network stack altogether.

When to choose which Docker network mode

Here’s a quick idea of when to use what:

  • Bridge Network: Great default. Perfect for apps that just need to run and expose ports with isolation. Works well with Docker Compose and lets you connect services easily using their names.
  • Host Network: Use it when performance or native networking is critical. Ideal for edge services, proxies, or tightly coupled host-level apps.
  • None: There's a network_mode: none too—this disables networking entirely. Use it for highly isolated jobs like offline batch processing or building artifacts.

Wrapping Up

The host network mode in Docker is best suited for services that require direct interaction with the local network.

Unlike Docker's default bridge network, which isolates containers with internal IP addresses, host mode allows a container to share the host's network stack, including its IP address and ports, without any abstraction.

In my own setup, I use host mode exclusively for Pi-hole, which acts as both a DNS resolver and DHCP server for the entire network.

For most other containers, such as web applications, reverse proxies, or databases, the bridge network is more appropriate. It ensures better isolation, security, and flexibility when exposing services selectively through port mappings.

In summary, host mode is a powerful but specialized tool. Use it only when your containerized service needs to behave like a native process on the host system.

Otherwise, Docker’s default networking modes will serve you better in terms of control and compartmentalization.

by: LHB Community
Sun, 20 Apr 2025 12:23:45 +0530


As a developer, efficiency is key. Being a full-stack developer myself, I’ve always thought of replacing boring tasks with automation.

What could happen if I just keep writing new code in a Python file, and it gets evaluated every time I save it? Isn’t that a productivity boost?

'Hot Reload' is that valuable feature of the modern development process that automatically reloads or refreshes the code after you make changes to a file. This helps the developers see the effect of their changes instantly and avoid manually restarting or refreshing the browser.

Over these years, I’ve used tools like entr to keep docker containers on the sync every time I modify docker-compose.yml file or keep testing with different CSS designs on the fly with browser-sync

1. entr

entr (Event Notify Test Runner) is a lightweight command line tool for monitoring file changes and triggering specified commands. It’s one of my favorite tools to restart any CLI process, whether it be triggering a docker build or restarting a python script or keep rebuilding the C project.

For developers who are used to the command line, entr provides a simple and efficient way to perform tasks such as building, testing, or restarting services in real time.

Key Features

  • Lightweight, no additional dependencies.
  • Highly customizable
  • Ideal for use in conjunction with scripts or build tools.
  • Linux only.

Installation

All you have to do is type in the following command in the terminal:

sudo apt install -y entr

Usage

Auto-trigger build tools: Use entr to automatically execute build commands like make, webpack, etc. Here's the command I use to do that:

ls docker-compose.yml | entr -r docker build .

Here, the -r flag reloads the child process, which is the 'docker build' command.

0:00
/0:23

Automatically run tests: Automatically re-run unit tests or integration tests after modifying the code.

ls *.ts | entr bun test
entr usage

2. nodemon

nodemon is an essential tool for developers working on Node.js applications. It automatically monitors changes to project files and restarts the Node.js server when files are modified, eliminating the need for developers to restart the server manually.

Key Features

  • Monitor file changes and restart Node.js server automatically.
  • Supports JavaScript and TypeScript projects
  • Customize which files and directories to monitor.
  • Supports common web frameworks such as Express, Hapi.

Installation

You can type in a single command in the terminal to install the tool:

npm install -g nodemon

If you are installing Node.js and npm for the first time on an Ubuntu-based distribution, you can follow our Node.js installation tutorial.

Usage

When you type in the following command, it starts server.js and will automatically restart the server if the file changes.

nodemon server.js
nodemon

3. LiveReload.net

LiveReload.net is a very popular tool, especially for front-end developers. It automatically refreshes the browser after you save a file, helping developers see the effect of changes immediately, eliminating the need to manually refresh the browser.

Unlike others, it is a web-based tool, and you need to head to its official website to get started. Every file remains in your local network. No files are uploaded to a third-party server.

Key Features

  • Seamless integration with editors
  • Supports custom trigger conditions to refresh the page
  • Good compatibility with front-end frameworks and static websites.

Usage

livereload

It's stupidly simple. Just load up the website, and drag and drop your folder to start making live changes. 

4. fswatch

fswatch is a cross-platform file change monitoring tool for Linux and macOS, also usable on Windows via WSL (Windows Subsystem for Linux). It is powerful enough to monitor multiple files and directories for changes and perform actions accordingly.

Key Features

  • Supports cross-platform operation and can be used on Linux and macOS.
  • It can be used with custom scripts to trigger multiple operations.
  • Flexible configuration options to filter specific types of file changes.

Installation

To install it on a Linux distribution, type in the following in the terminal:

sudo apt install -y fswatch

If you have a macOS computer, you can use the command:

brew install fswatch

Usage

You can try typing in the command here:

fswatch -o . | xargs -n1 -I{} make
fswatch

And, then you can chain this command with an entr command for a rich interactive development experience.

ls hellomake | entr -r ./hellomake

The “fswatch” command will invoke make to compile the C application, and then, if our binary “hellomake” is modified, we’ll run it again. Isn’t this a time saver?

5. Watchexec

Watchexec is a cross-platform command line tool for automating the execution of specified commands when a file or directory changes. It is a lightweight file monitor that helps developers automate tasks such as running tests, compiling code, or reloading services when a source code file changes. 

  Key Features

  • Support cross-platform use (macOS, Linux, Windows).
  • Fast, written in Rust.
  • Lightweight, no complex configuration.

Installation

On Linux, just type in:

sudo apt install watchexec

And, if you want to try it on macOS (via homebrew):

brew install watchexec

You can also download corresponding binaries for your system from the project’s Github releases section.

Usage

All you need to do is just run the command:

watchexec -e py "pytest"

This will run pytests every time a Python file in the current directory is modified.

6. BrowserSync

BrowserSync is a powerful tool that not only monitors file changes, but also synchronizes pages across multiple devices and browsers. BrowserSync can be ideal for developers who need to perform cross-device testing.

Key features

  • Cross-browser synchronization.
  • Automatically refreshes multiple devices and browsers.
  • Built-in local development server.

Installation

Considering you have Node.js installed first, type in the following command:

npm i -g browser-sync

Or, you can use:

npx browser-sync

Usage

Here is how the commands for it would look like:

browser-sync start --server --files "/*.css, *.js, *.html"
npx browser-sync start --server --files "/*.css, *.js, *.html"

You can use either of the two commands for your experiments.

browsersync

This command starts a local server and monitors the CSS, JS, and HTML files for changes; the browser is automatically refreshed as soon as a change occurs. If you're a developer who isn't using a modern frontend framework, this comes in handy.

7. watchdog & watchmedo

Watchdog is a file system monitoring library written in Python that allows you to monitor file and directory changes in real time. Whether it's file creation, modification, deletion, or file move, Watchdog can help you catch these events and trigger the appropriate action.

Key Features

  • Cross-platform support
  • Provides full flexibility with its Python-based API
  • Includes watchmedo script to hook any CLI application easily

Installation

Install Python first, and then install with pip using the command below:

pip install watchdog

Usage

Type in the following and watch it in action:

watchmedo shell-command --patterns="*.py" --recursive --command="python factorial.py" .

This command watches the current directory (and its subdirectories) for changes to Python files and re-runs factorial.py whenever a matching file is modified, created, or deleted.

In the command, --patterns="*.py" watches .py files, --recursive includes subdirectories, and --command="python factorial.py" runs the Python file.
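The watchdog package also ships a watchmedo auto-restart subcommand for long-running processes, which stops and relaunches the program instead of running a new copy next to it. A minimal sketch, assuming a hypothetical server.py:

watchmedo auto-restart --directory=. --patterns="*.py" --recursive -- python server.py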

Conclusion

Hot reloading tools have become increasingly important in the development process, and they can help developers save a lot of time and effort and increase productivity. With tools like entr, nodemon, LiveReload, Watchexec, Browser Sync, and others, you can easily automate reloading and live feedback without having to manually restart the server or refresh the browser.

Integrating these tools into your development process can drastically reduce repetitive work and waiting time, allowing you to focus on writing high-quality code.

Whether you're developing a front-end application or a back-end service or managing a complex project, using these hot-reloading tools will enhance your productivity.


Bhuwan Mishra is a Fullstack developer, with Python and Go as his tools of choice. He takes pride in building and securing web applications, APIs, and CI/CD pipelines, as well as tuning servers for optimal performance. He also has a passion for working with Kubernetes.

by: LHB Community
Sat, 19 Apr 2025 15:59:35 +0530


As a Kubernetes engineer, I deal with kubectl almost every day. Checking pod status, listing services, locating CrashLoopBackOff pods, comparing YAML configurations, viewing logs... these are almost daily operations!

But to be honest, between switching namespaces, manually copying pod names, and scrolling through logs again and again, I gradually felt burned out. That is, until I came across KubeTUI, a little tool that got me back on my feet.

What is KubeTUI

KubeTUI, short for Kubernetes Terminal User Interface, is a Kubernetes dashboard that you can use right in the terminal. It's not like the traditional kubectl, which makes you memorize and type out commands, or the Kubernetes Dashboard, which requires a browser, an Ingress, a token, and a pile of configuration just to log in.

In a nutshell, it's a tool that lets you happily browse the state of your Kubernetes cluster from your terminal.

Installing KubeTUI

KubeTUI is written in Rust, and you can download its binary releases from GitHub. Once you have done that, you need to set up a Kubernetes environment to build and monitor your application.

Let me show you how that is done, with an example of building a WordPress application.

Setting up the Kubernetes environment

We’ll use K3s to spin up a Kubernetes environment. The steps are mentioned below.

Step 1: Install k3s and run

curl -sfL https://get.k3s.io | sh -

With this single command, K3s will start itself after installation. Later on, you can use the command below to start the K3s server.

sudo k3s server --write-kubeconfig-mode='644'

Here’s a quick explanation of what the command includes :

  • k3s server: It starts the K3s server component, which is the core of the Kubernetes control plane.
  • --write-kubeconfig-mode='644': It ensures that the generated kubeconfig file has permissions that allow the owner to read and write it, and the group and others to only read it. If you start the server without this flag, you need to use sudo for all k3s commands.

Step 2: Check available nodes via kubectl

We need to verify that the Kubernetes control plane is actually working before we make any deployments. You can use the command below to check that:

k3s kubectl get node

Step 3: Deploy WordPress using Helm chart (Sample Application)

K3s provides Helm integration, which helps manage Kubernetes applications. Simply apply a YAML manifest to spin up WordPress in the Kubernetes environment from the Bitnami Helm chart.

Create a file named wordpress.yaml with the contents:

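A minimal sketch using the K3s HelmChart resource, assuming the Bitnami WordPress chart and a wpdev namespace (adjust the values for your own setup):

apiVersion: v1
kind: Namespace
metadata:
  name: wpdev
---
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: wordpress
  namespace: kube-system
spec:
  repo: https://charts.bitnami.com/bitnami
  chart: wordpress
  targetNamespace: wpdev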

You can then apply the configuration file to the application using the command:

k3s kubectl apply -f wordpress.yaml

It will take around 2–3 minutes for the whole setup to complete.

Step 4: Launch KubeTUI

To launch KubeTUI, type the following command in the terminal.

kubetui

Here's what you will see. There are no pods in the default namespace. Let's switch to the wpdev namespace we created earlier by hitting "n".

change namespaces

How to Use KubeTUI

To navigate to the different tabs, like switching from Pod to Config or Network, you can click with your mouse or press the corresponding number.


You can also switch tabs with the keyboard:

kubetui switch tabs

If you need help with Kubetui at any time, press ? to see all the available options.

kubetui help

It integrates a vim-like search mode. To activate search mode, enter /.

Tip for Log filtering 

I discovered an interesting feature to filter logs from multiple Kubernetes resources. For example, say we want to target logs from all pods with names containing wordpress; KubeTUI will combine the logs from all matching pods. We can use the query:

pod:wordpress

You can target other resource types like svc, jobs, deploy, statefulsets, and replicasets with the same log filtering. And if you want to exclude certain pod or container logs instead of combining them, you can do so with the !pod:pod-to-exclude and !container:container-to-exclude filters.

Conclusion

Working with Kubernetes involves switching between different namespaces, pods, networks, configs, and services. KubeTUI can be a valuable asset for managing and troubleshooting a Kubernetes environment.

I find myself more productive using tools like KubeTUI. Share your thoughts on what tools you’re utilizing these days to make your Kubernetes journey smoother.


Bhuwan Mishra is a Fullstack developer, with Python and Go as his tools of choice. He takes pride in building and securing web applications, APIs, and CI/CD pipelines, as well as tuning servers for optimal performance. He also has a passion for working with Kubernetes.

by: Abhishek Prakash
Mon, 14 Apr 2025 10:58:44 +0530


Lately, whenever I tried accessing a server via SSH, it asked for a passphrase:

Enter passphrase for key '/home/abhishek/.ssh/id_rsa':

Interestingly, it was asking for my local system's account password, not the remote server's.

Entering the account password for SSH key is a pain. So, I fixed it with this command which basically resets the password:

ssh-keygen -p

It then asked for the file which has the key. This is the private ssh key, usually located in .ssh/id_rsa file. I provided the absolute path for that.

Now it asked for the 'old passphrase' which is the local user account password. I provided it one more time and then just pressed enter for the new passphrase.

❯ ssh-keygen -p
Enter file in which the key is (/home/abhishek/.ssh/id_ed25519): /home/abhishek/.ssh/id_rsa
Enter old passphrase: 
Enter new passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved with the new passphrase.

And thus, it didn't ask for the passphrase for the SSH private key anymore. It didn't even need a reboot or anything.

Wondering why it happened and how it was fixed? Let's go in detail.

What caused 'Enter passphrase for key' issue?

Here is my efficient SSH workflow. I have the same set of SSH keys on my personal systems, so I don't have to create new ones and add them to the servers every time I install a new distro.

Since the public SSH key is added to the servers, I don't have to enter the root password for the servers every time I use SSH.

And then I have an SSH config file in place that maps the server's IP address with an easily identifiable name. It further smoothens my workflow.

Recently, I switched my personal system to CachyOS. I copied my usual SSH keys from an earlier backup and gave them the right permission.

But when I tried accessing any server, it asked for a passphrase:

Enter passphrase for key '/home/abhishek/.ssh/id_rsa':

No, it was not the remote server's user-password. It asked for my regular, local system's password as if I were using sudo.

I am guessing that some settings somewhere were left untouched and it started requiring a password to unlock the private SSH key.

This is an extra layer of security, and I don't like the inconvenience that comes with it.

One method to use SSH without entering the password each time to unlock is to reset the password on the SSH key.

And that's what you saw at the beginning of this article.

Fixing it by resetting the password on SSH key

Note down the location of your SSH private key. Usually, it is ~/.ssh/id_rsa unless you have multiple SSH key sets for different servers.

Enter the following command to reset the password on an SSH key:

ssh-keygen -p

It will ask you for the path to the key. Provide the absolute path to your private SSH key.

Enter file in which the key is (/home/abhishek/.ssh/id_ed25519):

It then asks you to enter the old passphrase, which should be your local account's password. The same one that you use for sudo.

Enter old passphrase:

Once you have entered that, it will ask you to enter a new passphrase. Keep it empty by pressing the enter key. This way, the key won't have any passphrase.

Enter new passphrase (empty for no passphrase):

Press enter key again when it asks:

Enter same passphrase again:

And that's about it.

Reset the password on ssh key to fix the passphrase issue

You can instantly verify it. You don't need to reboot the system or even log out from the terminal.

Enjoy SSH 😄

by: Abhishek Prakash
Fri, 11 Apr 2025 17:22:49 +0530


Linux can feel like a big world when you're just getting started — but you don’t have to figure it all out on your own.

Each edition of LHB Linux Digest brings you clear, helpful articles and quick tips to make everyday tasks a little easier.

Chances are, a few things here will click with you — and when they do, try working them into your regular routine. Over time, those small changes add up and before you know it, you’ll feel more confident and capable navigating your Linux setup.

Here are the highlights of this edition:

  • Running sudo without password
  • Port mapping in Docker
  • Docker log viewer tool
  • And more tools, tips and memes for you
  • This edition of LHB Linux Digest newsletter is supported by Typesense.

❇️ Typesense, Open Source Algolia Alternative

Typesense is the free, open-source search engine for forward-looking devs.

Make it easy on people: Tpyos? Typesense knows we mean typos, and they happen. With ML-powered typo tolerance and semantic search, Typesense helps your customers find what they’re looking for—fast.

👉 Check them out on GitHub.

by: Umair Khurshid
Tue, 08 Apr 2025 12:11:49 +0530


Port management in Docker and Docker Compose is essential to properly expose containerized services to the outside world, both in development and production environments.

Understanding how port mapping works helps avoid conflicts, ensures security, and improves network configuration.

This tutorial will help you understand how to configure and map ports effectively in Docker and Docker Compose.

What is port mapping in Docker?

Port mapping exposes network services running inside a container to the host, to other containers on the same host, or to other hosts and network devices. It allows you to map a specific port from the host system to a port on the container, making the service accessible from outside the container.

In the schematic below, there are two separate services running in two containers, and both use port 80. Their ports are mapped to host ports 8080 and 8090, and thus they are accessible from outside using these two ports.

Docker port mapping example

How to map ports in Docker

Typically, a running container has its own isolated network namespace with its own IP address. By default, containers can communicate with each other and with the host system, but external network access is not automatically enabled.

Port mapping is used to create communication between the container's isolated network and the host system's network.

For example, let's map Nginx to port 80:

docker run -d --publish 8080:80 nginx

The --publish flag (usually shortened to -p) is what allows us to create that association between the local port (8080) and the port of interest to us in the container (80).

In this case, to access it, you simply use a web browser and access http://localhost:8080

On the other hand, if the image you are using to create the container has made good use of the EXPOSE instructions, you can use the command in this other way:

docker run -d --publish-all hello-world

Docker takes care of choosing random free ports on your machine (instead of port 80 or other specific ports) to map to the ports specified with EXPOSE in the Dockerfile.
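To find out which host ports were picked, you can ask Docker directly (replace the placeholder with your container's ID or name):

docker port <container_id>

It prints each exposed container port along with the host port Docker assigned to it; docker ps shows the same information in its PORTS column.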

Mapping ports with Docker Compose

Docker Compose allows you to define container configurations in a docker-compose.yml. To map ports, you use the ports YAML directive.

version: '3.8'
services:
  web:
    image: nginx:latest
    ports:
      - "8080:80"

In this example, as in the previous case, the Nginx container will expose port 80 on the host's port 8080.

Port mapping vs. exposing

It is important not to confuse the ports directive with the expose directive. The former creates true port forwarding to the outside. The latter only documents that an internal port is used by the container, but does not create any exposure to the host.

services:
  app:
    image: myapp
    expose:
      - "3000"

In this example, port 3000 will only be accessible from other containers in the same Docker network, but not from outside.

Mapping Multiple Ports

You just saw how to map a single port, but Docker also allows you to map more than one port at a time. This is useful when your container needs to expose multiple services on different ports.

Let's configure an Nginx server to handle both HTTP and HTTPS traffic:

docker run -p 8080:80 -p 443:443 nginx

Now the server listens for both HTTP traffic on port 8080, mapped to port 80 inside the container, and HTTPS traffic on port 443, mapped to port 443 inside the container.

Specifying host IP address for port binding

By default, Docker binds container ports to all available IP addresses on the host machine. If you need to bind a port to a specific IP address on the host, you can specify that IP in the command. This is useful when you have multiple network interfaces or want to restrict access to specific IPs.

docker run -p 192.168.1.100:8080:80 nginx

This command binds port 8080 on the specific IP address 192.168.1.100 to port 80 inside the container.
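A common variant is binding to the loopback address so that the service is reachable only from the host itself, which is handy for databases or admin panels during development:

docker run -p 127.0.0.1:8080:80 nginx

The container's port 80 is now reachable at http://127.0.0.1:8080 on the host, but not from other machines on the network.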

Port range mapping

Sometimes, you may need to map a range of ports instead of a single port. Docker allows this by specifying a range for both the host and container ports. For example,

docker run -p 5000-5100:5000-5100 nginx

This command maps a range of ports from 5000 to 5100 on the host to the same range inside the container. This is particularly useful when running services that need multiple ports, like a cluster of servers or applications with several endpoints.

Using different ports for host and container

In situations where you need to avoid conflicts, security concerns, or manage different environments, you may want to map different port numbers between the host machine and the container. This can be useful if the container uses a default port, but you want to expose it on a different port on the host to avoid conflicts.

docker run -p 8081:80 nginx

This command maps port 8081 on the host to port 80 inside the container. Here, the container is still running its web server on port 80, but it is exposed on port 8081 on the host machine.

Binding to UDP ports (if you need that)

By default, Docker maps TCP ports. However, you can also map UDP ports if your application uses UDP. This is common for protocols and applications that require low latency, real-time communication, or broadcast-based communication.

For example, DNS uses UDP for query and response communication due to its speed and low overhead. If you are running a DNS server inside a Docker container, you would need to map UDP ports.

docker run -p 53:53/udp ubuntu/bind9

This command maps UDP port 53 on the host to UDP port 53 inside the container.
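Since DNS also uses TCP (for zone transfers and larger responses), you would typically publish both protocols, along these lines:

docker run -p 53:53/tcp -p 53:53/udp ubuntu/bind9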

Inspecting and verifying port mapping

Once you have set up port mapping, you may want to verify that it’s working as expected. Docker provides several tools for inspecting and troubleshooting port mappings.

To list all active containers and see their port mappings, use the docker ps command. The output includes a PORTS column that shows the mapping between the host and container ports.

docker ps

This might output something like:

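For instance, for an Nginx container started with -p 8080:80 (the ID and name below are placeholders):

CONTAINER ID   IMAGE   COMMAND                  CREATED         STATUS         PORTS                                   NAMES
3f4e8a1b2c3d   nginx   "/docker-entrypoint.…"   5 minutes ago   Up 5 minutes   0.0.0.0:8080->80/tcp, :::8080->80/tcp   web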

If you want more detailed information about a container's port mappings, you can use docker inspect. This command gives you JSON output with detailed information about the container's configuration.

docker inspect <container_id> | grep "Host"

This command will display the port mappings, such as:
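The lines of interest are the HostIp and HostPort pairs (other Host* fields such as HostnamePath will also match the grep). For the container above, the relevant part would look something like:

"HostIp": "0.0.0.0",
"HostPort": "8080"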

Wrapping Up


When you are first learning Docker, networking and port mapping is often one of the trickier topics. I hope this rundown has helped clarify how port mapping works and how you can effectively use it to connect your containers to the outside world and orchestrate services across different environments.

by: Team LHB
Mon, 07 Apr 2025 17:16:55 +0530


After years of training DevOps students and taking interviews for various positions, I have compiled this list of Docker interview questions (with answers) that are generally asked in the technical round.

I have categorized them into various levels:

  • Entry level (very basic Docker questions)
  • Mid-level (slightly deep in Docker)
  • Senior-level (advanced level Docker knowledge)
  • Common for all (generic Docker stuff for all)
  • Practice Dockerfile examples with optimization challenge (you should love this)

If you are absolutely new to Docker, I highly recommend our Docker course for beginners.

Learn Docker: Complete Beginner’s Course
Learn Docker, an important skill to have for any DevOps and modern sysadmin. Learn all the essentials of Docker in this series.

Let's go.

Entry level Docker questions

What is Docker?

Docker is a containerization platform that allows you to package an application and its dependencies into a container. Unlike virtualization, Docker containers share the host OS kernel, making them more lightweight and efficient.

What is Containerization?

It’s a way to package software in a format that can run isolated on a shared OS.

What are Containers?

Containers are packages that contain an application along with everything it needs, such as libraries and dependencies.

What is Docker image?

  • Docker image is a read-only template that contains a set of instructions for creating a container that can run on the Docker platform.
  • It provides a convenient way to package up applications and preconfigured server environments, which you can use for your own private use or share publicly with other Docker users.

What is Docker Compose?

It is a tool for defining and running multi-container Docker applications.

What’s the difference between virtualization and containerization?

Virtualization abstracts the entire machine with separate VMs, while containerization abstracts the application with lightweight containers sharing the host OS.

Describe a Docker container’s lifecycle

Create | Run | Pause | Unpause | Start | Stop | Restart | Kill | Destroy

Docker lifecycle

What is a volume in Docker, and which command do you use to create it?

  • A volume in Docker is a persistent storage mechanism that allows data to be stored and accessed independently of the container's lifecycle.
  • Volumes enable you to share data between containers or persist data even after a container is stopped or removed.
docker volume create <volume_name>

For example: docker run -v data_volume:/var/lib/mysql mysql mounts the data_volume volume at /var/lib/mysql inside a MySQL container.

What is Docker Swarm?

Docker Swarm is a tool for clustering & managing containers across multiple hosts.

How do you remove unused data in Docker?

Use docker system prune to remove unused data, including stopped containers, unused networks, and dangling images.

Mid-level Docker Questions

What command retrieves detailed information about a Docker container?

Use docker inspect <container_id> to get detailed JSON information about a specific Docker container.

How do the Docker Daemon and Docker Client interact?

The Docker Client communicates with the Docker Daemon through a REST API over a Unix socket or TCP/IP.

How can you set CPU and memory limits for a Docker container?

Use docker run --memory="512m" --cpus="1.5" <image> to set memory and CPU limits.

Can a Docker container be configured to restart automatically?

Yes, a Docker container can be configured to restart automatically using restart policies such as --restart always or --restart unless-stopped.

What methods can you use to debug issues in a Docker container?

  • Inspect logs with docker logs <container_id> to view output and error messages.
  • Execute commands interactively using docker exec -it <container_id> /bin/bash to access the container's shell.
  • Check container status and configuration with docker inspect <container_id>.
  • Monitor resource usage with docker stats to view real-time performance metrics.
  • Use Docker's built-in debugging tools and third-party monitoring solutions for deeper analysis.

What is the purpose of Docker Secrets?

Docker Secrets securely manage sensitive data like passwords for Docker services. Use docker secret create <secret_name> <file> to add secrets.

What are the different types of networks in Docker, and how do they differ?

Docker provides several types of networks to manage how containers communicate with each other and with external systems.

Here are the main types:

  • Bridge
  • None
  • Host
  • Overlay Network
  • Macvlan Network
  • IPvlan Network

bridge: This is the default network mode. Each container connected to a bridge network gets its own IP address and can communicate with other containers on the same bridge network using this IP

docker run ubuntu

Useful for scenarios where you want isolated containers to communicate through a shared internal network.

none: Containers attached to the none network are not connected to any network. They don't have any network interfaces except the loopback interface (lo).

docker run --network=none ubuntu

Useful when you want to create a container with no external network access for security reasons.

host: The container shares the network stack of the Docker host, which means it has direct access to the host's network interfaces. There's no isolation between the container and the host network.

docker run --network=host ubuntu

Useful when you need the highest possible network performance, or when you need the container to use a service on the host system.

Overlay Network : Overlay networks connect multiple Docker daemons together, enabling swarm services to communicate with each other. It's used in Docker Swarm mode for multi-host networking.

docker network create -d overlay my_overlay_network

Useful for distributed applications that span multiple hosts in a Docker Swarm.

Macvlan Network : Macvlan networks allow you to assign a MAC address to a container, making it appear as a physical device on the network. The container can communicate directly with the physical network using its own IP address.

docker network create -d macvlan --subnet=192.168.1.0/24 --gateway=192.168.1.1 -o parent=eth0 my_macvlan_network

Useful when you need containers to appear as physical devices on the network and need full control over the network configuration.

IPvlan Network: Similar to Macvlan, but uses different methods to route packets. It's more lightweight and provides better performance by leveraging the Linux kernel's built-in network functionalities.

docker network create -d ipvlan --subnet=192.168.1.0/24 --gateway=192.168.1.1 -o parent=eth0 my_ipvlan_network

Useful for scenarios where you need low-latency, high-throughput networking with minimal overhead.

Explain the main components of Docker architecture

Docker consists of the Docker Host, Docker Daemon, Docker Client, and Docker Registry.

  • The Docker Host is the computer (or server) where Docker is installed and running. It's like the home for Docker containers, where they live and run.
  • The Docker Daemon is a background service that manages Docker containers on the Docker Host. It's like the manager of the Docker Host, responsible for creating, running, and monitoring containers based on instructions it receives.
  • The Docker Client communicates with the Docker Daemon, which manages containers.
  • The Docker Registry stores and distributes Docker images.

How does a Docker container differ from an image?

A Docker image is a static, read-only blueprint, while a container is a running instance of that image. Containers are dynamic and can be modified or deleted without affecting the original image.

Explain the purpose of a Dockerfile.

Dockerfile is a script containing instructions to build a Docker image. It specifies the base image, sets up the environment, installs dependencies, and defines how the application should run.

How do you link containers in Docker?

Docker provides network options to enable communication between containers. Docker Compose can also be used to define and manage multi-container applications.
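For example, here is a minimal sketch of container-to-container communication over a user-defined bridge network (the names app-net, cache, and web, as well as the myapp image, are placeholders):

docker network create app-net
docker run -d --name cache --network app-net redis
docker run -d --name web --network app-net myapp

On a user-defined network, Docker's embedded DNS resolves container names, so the web container can reach the Redis instance simply at cache:6379.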

How can you secure a Docker container?

Container security involves using official base images, minimizing the number of running processes, implementing least-privilege principles, regularly updating images, and utilizing Docker security scanning tools, e.g., Docker image vulnerability scanning.

Difference between ARG & ENV?

  • ARG is for build-time variables, and its scope is limited to the build process.
  • ENV is for environment variables, and its scope extends to both the build process and the running container.
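A quick illustration of the difference (the variable names and image tag are hypothetical):

docker build --build-arg APP_VERSION=1.2.0 -t myapp .
docker run -e APP_ENV=production myapp

APP_VERSION is only available while the image is being built, whereas APP_ENV is visible to the application inside the running container.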

Difference between RUN, ENTRYPOINT & CMD?

  • RUN : Executes a command during the image build process, creating a new image layer.
  • ENTRYPOINT : Defines a fixed command that always runs when the container starts. Note: it can be overridden at runtime using --entrypoint.
  • CMD : Specifies a default command or arguments that can be overridden at runtime.

Difference between COPY & ADD?

  • If you are just copying local files, it's often better to use COPY for simplicity.
  • Use ADD when you need additional features like extracting compressed archives or pulling resources from URLs.

How do you drop the MAC_ADMIN capability when running a Docker container?

Use the --cap-drop flag with the docker run command:

docker run --cap-drop MAC_ADMIN ubuntu

How do you add the NET_BIND_SERVICE capability when running a Docker container?

Use the --cap-add flag with the docker run command:

docker run --cap-add NET_BIND_SERVICE ubuntu

How do you run a Docker container with all privileges enabled?

Use the --privileged flag with the docker run command:

docker run --privileged ubuntu
by: Abhishek Prakash
Fri, 28 Mar 2025 18:10:14 +0530


Welcome to the latest edition of LHB Linux Digest. I don't know if you have noticed but I have changed the newsletter day from Wednesday to Friday so that you can enjoy your Fridays learning something new and discovering some new tool. Enjoy 😄

Here are the highlights of this edition :

  • Creating a .deb package from Python app
  • Quick Vim tip on indentation
  • Pushing Docker image to Hub
  • And more tools, tips and memes for you
  • This edition of LHB Linux Digest newsletter is supported by PikaPods.

❇️ Self-hosting without hassle

PikaPods allows you to quickly deploy your favorite open source software. All future updates are handled automatically by PikaPods while you enjoy using the software. I use it to self host Umami analytics.

Oh! You get $5 free credit, so try it out and see if you could rely on PikaPods.

PikaPods - Instant Open Source App Hosting
Run the finest Open Source web apps from $1.20/month, fully managed, no tracking, no ads, full privacy. Self-hosting was never this convenient.
