
Entries in this blog

by: Abhishek Prakash
Fri, 16 May 2025 17:00:52 +0530


In the previous edition, I asked your opinion on the frequency of the newsletters. Out of all the responses I got, 76% of members want it on a weekly basis.

Since we live in a democratic world, I'll go with the majority here. I hope the remaining 24% won't mind seeing the emails once each week ;)

Here are the highlights of this edition:

  • TCP Proxy with socat
  • Out of memory killer explained
  • Nerdlog for better log viewing
  • And a regular dose of tips, tutorials and memes

🚀 Elevate Your DevOps Career – Up to 50% OFF!

Linux Foundation Sale

This May, the Linux Foundation is offering 50% off on certifications with THRIVE Annual Subscriptions, 40% off on training courses, and 10% off on THRIVE access.

Top Bundles:

  • LFCS + THRIVE — Master Linux Administration
  • CKA + THRIVE — Become a Kubernetes Pro
  • CKAD + THRIVE — Level up Kubernetes Development
  • CKS + THRIVE — Specialize in Kubernetes Security

Offer ends May 20, 2025!

by: LHB Community
Thu, 15 May 2025 15:44:09 +0530


A TCP proxy is a simple but powerful tool that sits between a client and a server and is responsible for forwarding TCP traffic from one location to another. It can be used to redirect requests or provide access to services located behind a firewall or NAT. socat is a handy utility that lets you establish bidirectional data flow between two endpoints. Let's see how you can use it to set up a TCP proxy.

A lightweight and powerful TCP proxy tool is socat (short for "SOcket CAT"). It establishes a bidirectional data flow between two endpoints. These endpoints can be of many types, such as TCP, UDP, UNIX sockets, files, and even processes.

As a former developer and sysadmin, I can't count the number of times I've used socat, and it's often saved me hours of troubleshooting.🤯

Whether it's testing a service behind the company firewall, redirecting traffic between local development environments, or simply figuring out why one container isn't communicating with another, it's one of those tools that amazes you once you understand what it can do. So many problems can be solved with a single line of command.

In this tutorial, you will learn how to build a basic TCP proxy using socat. By the end of the tutorial, you'll have a working configuration that listens on a local port and forwards incoming traffic to a remote server or service. This is a fast and efficient way to implement traffic proxying without resorting to more complex tools.

Let's get started!

Prerequisites

This tutorial assumes you have a basic knowledge of TCP/IP networks. You also need socat installed on your system:

# Debian/Ubuntu
sudo apt-get install socat
# macOS (Homebrew)
brew install socat

Understanding the basic socat command syntax

Here’s the basic socat syntax:

socat <source> <destination>

These addresses can be in the following format:

  • TCP4-LISTEN:<port> (listen for incoming TCP connections on a local port)
  • TCP4:<host>:<port> (connect to a remote host on a given port)

The point is: all you have to do is tell socat “where to receive the data from” and “where to send the data to,” and it will automatically do the forwarding in both directions.
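
For instance, this one-liner (host and ports are arbitrary examples) listens on local port 8080 and forwards every connection to example.com on port 80:

socat TCP4-LISTEN:8080,fork,reuseaddr TCP4:example.com:80

The fork option makes socat handle each incoming connection in a child process, and reuseaddr lets you restart the proxy without waiting for the old socket to time out.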

Setting up a basic TCP proxy

Let's say you have a TCP server working on localhost (the loopback interface). Maybe some restrictions prevent you from modifying the application to launch it on a different interface. Now, there's a scenario where you need to access the service from another machine on the LAN. Socat comes to the rescue.

Example 1: Tunneling Android ADB

First, establish a connection with the Android device via ADB, and then restart the adb daemon in TCP/IP mode.

adb devices
adb tcpip 5555

On some devices, running this adb tcpip 5555 command will expose the service on the LAN interface, but in my setup, it doesn't. So, I decided to use socat.

socat tcp4-listen:5555,fork,reuseaddr,bind=192.168.1.33 tcp4:localhost:5555

A quick reminder: your LAN IP will be different, so adjust the bind value accordingly. You can check all your IPs via ifconfig or ip addr.

using adb for proxy

Example 2: Python server

We’ll use Python to start a TCP server on the loopback interface just for demonstration purposes. In fact, it will start an HTTP server and serve the contents of the current directory, but under the hood, HTTP is a TCP connection.

🚧
Start this command from a non-sensitive directory.
python -m http.server --bind 127.0.0.1

This starts an HTTP server on port 8000 by default. Now, let’s verify by opening localhost:8000 in the browser or using a curl request.

curl http://localhost:8000
localhost connection

What if we curl the same port, but this time using the IP assigned on the LAN? It doesn't work.

localhost connection failed

Let's fix that by proxying the LAN interface to the loopback interface with socat:

socat tcp4-listen:8005,fork,reuseaddr,bind=192.168.1.33 tcp4:localhost:8000

Now, establish the connection on port 8005.

localhost connection success

When connecting from a different device to http://192.168.1.33:8005, you might get a connection refused error because of firewall rules. In that case, add a firewall rule to allow access to the service.

You can refer to our tutorial on using UFW to manage the firewall for more details. Here are the commands to do the job quickly:

sudo ufw allow 8005/tcp
sudo ufw status
ufw firewall add

Conclusion

Whether you are proxying between containers or opening services on different ports, socat proves to be a versatile and reliable tool. If you need a quick and easy proxy setup, give it a try — you'll be amazed at how well it integrates with your workflow.


Bhuwan Mishra is a Fullstack developer, with Python and Go as his tools of choice. He takes pride in building and securing web applications, APIs, and CI/CD pipelines, as well as tuning servers for optimal performance. He also has a passion for working with Kubernetes.

by: LHB Community
Mon, 12 May 2025 10:43:21 +0530


Automating tasks is great, but what's even better is knowing when they're done or if they've gotten derailed.

Slack is a popular messaging tool used by many techies. And it supports bots that you can configure to get automatic alerts about things you care about.

Web server is down? Get an alert. Shell script completes running? Get an alert.

Yes, that can be done too. By adding Slack notifications to your shell scripts, you can share script outcomes with your team effortlessly, respond quickly to issues, and stay in the loop without manual checks. It lets you monitor automated tasks without constantly checking logs.

🚧
I am assuming you already use Slack and you have a fair idea about what a Slack Bot is. Of course, you should have at least basic knowledge of Bash scripting.

The Secret Sauce: curl and Webhooks

The magic behind delivering Slack notifications from shell scripts is Slack's Incoming Webhooks and the curl command line tool.

Basically, everything is already there for you to use; it just needs some setup to connect the pieces. I found it pretty easy, and I'm sure you will too.

Here's what the webhook and the command-line tool are for:

  • Incoming Webhooks: Slack allows you to create unique Webhook URLs for your workspace that serve as endpoints for sending HTTP POST requests containing messages.  
  • curl: This powerful command-line tool is great for making HTTP requests. We'll use it to send message-containing JSON payloads to Slack webhook URLs.

Enabling webhooks on Slack side

  1. Create a Slack account (if you don't have one already) and (optionally) create a Slack workspace for testing.
  2. Go to api.slack.com/apps and create a new app.
create a new slack app
  3. Open the application and, under the “Features” section, click on “Incoming Webhooks” and “Activate Incoming Webhooks”.
slack webhook activate
  4. Under the same section, scroll to the bottom. You’ll find a button “Add New Webhook to Workspace”. Click on it and add the channel.
webhook connection to workspace
  5. Test the sample curl request.

Important: The curl command you see above also contains the webhook URL. Notice that https://hooks.slack.com/services/xxxxxxxxxxxxx part? Note it down.
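
If you want to test it right away, the sample request looks something like this (the webhook URL below is a placeholder):

curl -X POST -H 'Content-type: application/json' \
    --data '{"text": "Hello, World!"}' \
    https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXXXXXXXXXXXXXX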

Sending Slack notifications from shell scripts

Set SLACK_WEBHOOK_URL environment variable in your .bashrc file as shown below.

webhook url
Use the webhook URL you got from Slack in the previous step
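
The line in .bashrc looks something like this (again, a placeholder URL):

export SLACK_WEBHOOK_URL="https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXXXXXXXXXXXXXX"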

Create a new file, notify_slack.sh, under your preferred directory location.

# Usage: notify_slack "text message"
# Requires: SLACK_WEBHOOK_URL environment variable to be set
notify_slack() {
    local text="$1"
    curl -s -X POST -H 'Content-type: application/json' \
        --data "{\"text\": \"$text\"}" \
        "$SLACK_WEBHOOK_URL"
}

Now, you can simply source this bash script wherever you need to notify Slack. I created a simple script to check disk usage and CPU load.

source ~/Documents/notify_slack.sh
# Get disk usage of the root partition
disk_usage=$(df -h / | awk 'NR==2 {print $5}')
# Get CPU load average
cpu_load=$(uptime | awk -F'load average:' '{ print $2 }' | cut -d',' -f1 | xargs)
hostname=$(hostname)
message="*System Status Report - $hostname*\n* Disk Usage (/): $disk_usage\n* CPU Load (1 min): $cpu_load"
# Send the notification
notify_slack "$message"

Running this script will post a new message on the Slack channel associated with the webhook.

Slack notifications from bash shell

Best Practices 

It is crucial to think about security and limitations when you are integrating things, no matter how insignificant you think it is. So, to avoid common pitfalls, I recommend you follow these two tips:

  • Avoid hardcoding the webhook URL directly in publicly shared scripts. Consider using environment variables or configuration files.
  • Be aware of Slack's rate limits for incoming webhooks, especially if your scripts may trigger notifications frequently. You may want to send notifications only in certain circumstances (for example, only on failure or only for critical scripts).

Conclusion

What I shared here was just a simple example. You can bring cron into the mix and periodically send notifications about server stats to Slack. You can also put in some logic to get notified when disk usage crosses a certain threshold.
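
For instance, assuming the status script above is saved as /home/abhishek/scripts/system_status.sh (an illustrative path), a crontab entry like this would post the report every hour:

# Runs at minute 0 of every hour; SLACK_WEBHOOK_URL must be available to cron,
# e.g. exported inside the script itself
0 * * * * /bin/bash /home/abhishek/scripts/system_status.sh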

There can be many more use cases and it is really up to you how you go about using it. With the power of Incoming Webhooks and curl, you can easily deliver valuable information directly to your team's communication center. Happy scripting!


Bhuwan Mishra is a Fullstack developer, with Python and Go as his tools of choice. He takes pride in building and securing web applications, APIs, and CI/CD pipelines, as well as tuning servers for optimal performance. He also has a passion for working with Kubernetes.

by: Abhishek Prakash
Fri, 09 May 2025 20:17:53 +0530


In the past few months, some readers have requested to increase the frequency of the newsletter to weekly, instead of bi-monthly.

What do you think? Are you happy with the current frequency, or do you want these emails each week?

Also, what would you like to see more? Linux tips, devops tutorials or lesser known tools?

Your feedback will shape this newsletter. Just hit the reply button. I read and reply to each response.

Here are the highlights of this edition:

  • TaskCrafter: A YAML-based task scheduler
  • Docker logging guide
  • cdd command (no, that's not a typo)
  • This edition of LHB Linux Digest is supported by ANY.RUN.

🎫 Free Webinar | How SOC Teams Save Time with ANY.RUN: Action Plan

Trusted by 15,000+ organizations, ANY.RUN knows how to solve SOC challenges. Join team leads, managers, and security pros to learn expert methods on how to:  

  • Increase detection of complex attacks  
  • Speed up alert & incident response  
  • Improve training & team coordination  

Book your seat for the webinar here.

How SOC Teams Save Time and Effort with ANY.RUN: Action Plan
Discover expert solutions for SOC challenges, with hands-on lessons to improve detection, triage, and threat visibility with ANY.RUN.
by: LHB Community
Tue, 06 May 2025 18:08:50 +0530


Anyone who works in a terminal all the time, on Linux or Windows, knows that one of the most frequently used commands is "cd" (change directory).

Many people have come up with tools to change the current directory intuitively. Some people use the CDPATH environment variable while others go with zoxide, but neither suits my needs.

So I created a tool that works for me as a better alternative to the cd command.

Here's the story.

Why did I build a cd command alternative?

In my daily work, I use the cd command a few dozen times (that's about the order of magnitude). I've always found it annoying to have to retype the same paths over and over again, or to search for them in the history.

By analyzing my use of “cd” and my command history, I realized that I was most often moving through fifty or so directories, and that they were almost always the same.

Below is the command I used, which displays the number of times a specific directory is the target of a “cd” command:

history | grep -E '^[ ]*[0-9]+[ ]+cd ' | awk '{print $3}' | sort | uniq -c | sort -nr

Here's how it works step by step:

  1. history: Lists your command history with line numbers
  2. grep -E '^[ ]*[0-9]+[ ]+cd ': Filters only lines that contain the cd command (with its history number)
  3. awk '{print $3}': Extracts just the directory path (the 3rd field) from each line
  4. sort: Alphabetically sorts all the directory paths
  5. uniq -c: Counts how many times each unique directory appears
  6. sort -nr: Sorts the results numerically in reverse order (highest count first)

The end result is a frequency list showing which directories you've changed to most often, giving you insights into your most commonly accessed directories.

The above command won't work if you have timestamps enabled in your command history.

From this observation, I thought: why not use mnemonic shortcuts to access the most-used directories?

So that's what I did, first for the Windows terminal, years ago, quickly followed by a port to Linux.

Meet cdd

Today cdd is the command I use the most in a console. Simple and very efficient.

GitHub - gsinger/cdd: Yet another tool to change current directory efficiently
Yet another tool to change current directory efficiently - gsinger/cdd

With cdd, you can:

  • Jump to a saved directory by simply typing its shortcut.
  • Bind any directory to a shortcut for later use.
  • View all your pre-defined shortcuts along with their directory paths.
  • Delete any shortcut that you no longer need.

Installing cdd

The source is available here.

The cdd_run file can be copied anywhere in your system. Don't forget to make it executable (chmod +x ./cdd_run)

Because the script changes the current directory, it cannot be launched in a separate bash process from your current session. It must be launched with the source command. Just add this alias in your ~/.bashrc file:

alias cdd='source ~/cdd_run'

Last step: Restart your terminal (or run source ~/.bashrc).

Running cdd without argument displays the usage of the tool.

In the end...

I wanted a short name that was not too far from "cd". My muscle memory is so used to "cd" that adding just a 'd' was the most efficient in terms of speed.

I understand that cdd may not be a tool for every Linux user. It's a tool for me, created by me for my needs, and I think there might be a few people out there who would like it as much as I do.

So, are you going to be one of them? Please let me know in the comments.

This article has been contributed by Guillaume Singer, developer of the cdd command.

by: Pranav Krishna
Tue, 29 Apr 2025 09:53:03 +0530


In this series on managing the tmux utility, we look at its first-level division: panes.

Panes divide the terminal window horizontally or vertically. Various combinations of these splits can result in different layouts, according to your liking.

Tmux window splitting into panes
Pane split of a tmux window

This is how panes work in tmux.

Creating Panes

Bring any given pane into focus. It could be a fresh window as well.

The current window can be split horizontally (up and down) with the key

[Ctrl+B] + "
horizontal split
Horizontal Split

And to split the pane vertically, use the combination

[Ctrl+B] + %
vertical split
Vertical Split

Resizing your panes

Tmux uses 'cells' to quantify the amount of resizing done at once. To illustrate, this is what resizing by 'one cell' looks like: one more character can be accommodated on that side.

resize by one cell
Resizing by 'one cell'

The combination part is a bit tricky for resizing. Stick with me.

Resize by one cell

Use the prefix Ctrl+B followed by Ctrl+arrow keys to resize in the required direction.

[Ctrl+B] Ctrl+arrow

This combination takes a fair number of keypresses, but can be precise.


Resize by five cells (quicker)

Instead of holding the Ctrl key, you could use the Alt key to resize faster. This resizes the pane by five cells at a time.

[Ctrl+B] Alt+arrow

Resize by a specific number of cells (advanced)

Beyond the key combinations, the tmux command line can resize the pane by any specific number of cells.

Enter the command line mode with

[Ctrl+B] + :

Then type

resize-pane -{U/D/L/R} xx
  • U/D/L/R represents the direction of resizing
  • xx is the number of cells to be resized

To resize a pane left by 20 cells, this is the command:

resize-pane -L 20

Resizing left by 20 cells

Similarly, to resize a pane upwards, the -U flag is used instead.


Resizing upwards by 15 cells

The resize-pane command is primarily useful for scripting a tmux layout whenever a new session is spawned.
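
For instance, a small shell script along these lines (session name and sizes are arbitrary) rebuilds a three-pane layout in a detached session:

#!/bin/bash
# Create a detached session, split it into three panes, then resize and attach
tmux new-session -d -s work
tmux split-window -h -t work      # add a pane to the right
tmux split-window -v -t work      # split that right pane into top and bottom
tmux resize-pane -t work -L 20    # resize the active pane left by 20 cells
tmux attach-session -t work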

Conclusion

Since pane sizes are always bound to change, knowing all the methods to vary them can come in handy. Hence, all possible methods are covered here.

Pro tip 🚀 - If you use a mouse with tmux, your cursor can resize the panes by dragging the pane borders.


Turning on mouse mode and resizing the panes

Go ahead and tell me which method you use in the comments.

by: Abhishek Prakash
Fri, 25 Apr 2025 21:30:04 +0530


Choosing the right tools is important for an efficient workflow. A seasoned Fullstack dev shares his favorites.

7 Utilities to Boost Development Workflow Productivity
Here are a few tools that I have discovered and use to improve my development process.

Here are the highlights of this edition:

  • The magical CDPATH
  • Using host networking with docker compose
  • Docker interview questions
  • And more tools, tips and memes for you
  • This edition of LHB Linux Digest newsletter is supported by PikaPods.

❇️ Self-hosting without hassle

PikaPods allows you to quickly deploy your favorite open source software. All future updates are handled automatically by PikaPods while you enjoy using the software. I use it to self-host Umami analytics.

Oh! You get $5 free credit, so try it out and see if you can rely on PikaPods.

PikaPods - Instant Open Source App Hosting
Run the finest Open Source web apps from $1.20/month, fully managed, no tracking, no ads, full privacy. Self-hosting was never this convenient.
by: Abhishek Prakash
Fri, 25 Apr 2025 20:55:16 +0530


If you manage servers on a regular basis, you'll often find yourself entering some directories more often than others.

For example, I self-host Ghost CMS to run this website. The Ghost install is located at /var/www/ghost/. I have to cd to this directory and then use its subdirectories to manage the Ghost install. If I have to enter its log directory directly, I have to type /var/www/ghost/content/log.

Typing out ridiculously long paths takes several seconds, even with tab completion.

Relatable? But what if I told you there's a magical shortcut that can make those lengthy directory paths vanish like free merchandise at a tech conference?

Enter CDPATH, the unsung hero of Linux navigation that, to my genuine surprise, many new Linux users are not even aware of!

What is CDPATH?

CDPATH is an environment variable that works a lot like the more familiar PATH variable (which helps your shell find executable programs). But instead of finding programs, CDPATH helps the cd command find directories.

Normally, when you use cd some-dir, the shell looks for some-dir only in the current working directory.

With CDPATH, you tell the shell to also look in other directories you define. If it finds the target directory there, it cds into it — no need to type full paths.

How does CDPATH work?

Imagine this directory structure:

/home/abhishek/
├── Work/
│   └── Projects/
│       └── WebApp/
├── Notes/
└── Scripts/

Let's say I often visit the WebApp directory. I'll have to type the absolute path if I am at some random location in the filesystem:

cd /home/abhishek/Work/Projects/WebApp

Or, since I am a bit smart, I'll use the ~ shortcut for the home directory.

cd ~/Work/Projects/WebApp

But if I add this location to the CDPATH variable:

export CDPATH=$HOME/Work/Projects

I could enter the WebApp directory from anywhere in the filesystem just by typing this:

cd WebApp

Awesome! Isn't it?

🚧
You should always add . (current directory) in the CDPATH and your CDPATH should start with it. This way, it will look for the directory in the current directory first and then in the directories you have specified in the CDPATH variable.

How to set CDPATH variable?

Setting up CDPATH is delightfully straightforward. If you ever added anything to the PATH variable, it's pretty much the same.

First, think about the frequently used directories that you want cd to search when no specific path has been provided.

Let's say, I want to add /home/abhishek/work and /home/abhishek/projects in CDPATH. I would use:

export CDPATH=.:/home/abhishek/work:/home/abhishek/projects

This creates a search path that includes:

  1. The current directory (.)
  2. My work directory
  3. My projects directory

Which means if I type cd some_dir, it will first check whether some_dir exists in the current directory. If not found, it searches my work directory and then my projects directory, in that order.

🚧
The order of the directories in CDPATH matters.

Let's say that both work and projects directories have a directory named docs which is not in the current directory.

If I use cd docs, it will take me to /home/abhishek/work/docs. Why? Because the work directory comes first in the CDPATH.

💡
If things look fine in your testing, you should make it permanent by adding the "export CDPATH" command you used earlier to your shell profile.

Whatever you export in CDPATH will only be valid for the current session. To make the changes permanent, you should add it to your shell profile.

I am assuming that you are using the bash shell. In that case, the profile should be ~/.profile or ~/.bash_profile.

Open this file with a text editor like Nano and add the CDPATH export command to the end.
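
For example, appending the line from the terminal and reloading the profile does the job (the directories are the same ones used above):

echo 'export CDPATH=.:/home/abhishek/work:/home/abhishek/projects' >> ~/.bash_profile
source ~/.bash_profile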

📋
When you use the cd command with an absolute or relative path, it won't refer to CDPATH. CDPATH is more like: hey, instead of just looking into my current sub-directories, search the specified directories, too. When you already give cd the full path (absolute or relative), there is no need to search; cd knows where you want to go.

How to find the CDPATH value?

CDPATH is an environment variable. How do you print the value of an environment variable? The simplest way is to use the echo command:

echo $CDPATH
📋
If you have tab completion set with cd command already, it will also work for the directories listed in CDPATH.

When not to use CDPATH?

Like all powerful tools, CDPATH comes with some caveats:

  1. Duplicate names: If you have identically named directories across your filesystem, you might not always land where you expect.
  2. Scripts: Be cautious about using CDPATH in scripts, as it might cause unexpected behavior. Scripts generally should use absolute paths for clarity.
  3. Demo and teaching: When working with others who aren't familiar with your CDPATH setup, your lightning-fast navigation might look like actual wizardry (which is kind of cool to be honest) but it could confuse your students.
💡
Including .. (parent directory) in your CDPATH creates a super-neat effect: you can navigate to 'sibling directories' without typing ../. If you're in /usr/bin and want to go to /usr/lib, just type cd lib.

Why aren’t more sysadmins using CDPATH in 2025?

CDPATH used to be a popular tool in the 90s, I think. Ask any sysadmin older than 50, and CDPATH would have been in their arsenal of CLI tools.

But these days, many Linux users have not even heard of the CDPATH concept. Surprising, I know.

Ever since I discovered CDPATH, I have been using it extensively, especially on the Ghost and Discourse servers I run. It saves me a few keystrokes, and I am proud of those savings.

By the way, if you don't mind including 'non-standard' tools in your workflow, you may also explore autojump instead of CDPATH.

GitHub - wting/autojump: A cd command that learns - easily navigate directories from the command line
A cd command that learns - easily navigate directories from the command line - wting/autojump

🗨️ Your turn. Were you already familiar with CDPATH? If yes, how do you use it? If not, is this something you are going to use in your workflow?

by: Ankush Das
Fri, 25 Apr 2025 10:58:48 +0530


As an engineer who has been tossing around Kubernetes in a production environment for a long time, I've witnessed the evolution from manual kubectl deployment to CI/CD script automation, to today's GitOps. In retrospect, GitOps is really a leap forward in the history of K8s Ops.

Nowadays, the two hottest players in GitOps tooling are Argo CD and Flux CD, both of which I've used in real projects. So I'm going to talk to you from the perspective of a Kubernetes engineer who has been through the pitfalls: which one is better for you?

Why GitOps?

The essence of GitOps is simple: 

“Manage your Kubernetes cluster with Git, and make Git the sole source of truth.”

This means: 

  • All deployment configurations are written in Git repositories
  • Tools automatically detect changes and deploy updates
  • If something goes wrong, a git revert brings everything back to normal
  • Auditing and security become more reliable

I used to maintain a game service, and in the early days, I used scripts + CI/CD tools to do deployments. Late one night, something went wrong: a manual error pushed an incorrect configuration into the cluster, and the whole service hung. Since I started using GitOps, I haven't had any more of these “man-made disasters”.

Now, let me start comparing Argo CD vs Flux CD.

Installation & Setup

Argo CD can be installed with a single YAML, and the UI and API are deployed together out of the box.

Here are the commands that make it happen:

kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
kubectl port-forward svc/argocd-server -n argocd 8080:443
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d && echo
argo cd installation

Flux CD follows a modular architecture; you need to install the Source Controller, Kustomize Controller, etc., separately. You can also simplify the process with flux install.

curl -s https://fluxcd.io/install.sh | sudo bash
flux --version
flux install --components="source-controller,kustomize-controller"
kubectl get pods -n flux-system
flux cd installation

For me, the winner here is Argo CD (because you get more out of the box with a single install).

Visual Interface (UI) 

argo cd ui
Argo CD UI

Argo CD has a powerful built-in Web UI to visually display the application structure, compare differences, synchronize operations, etc.

Unfortunately, Flux CD has no UI by default. It can be used with Weave GitOps or Grafana to check the status because it relies on the command line primarily.

Again, the winner for me: Argo CD, because of its built-in web UI.

Synchronization and Deployment Strategies 

Argo CD supports manual synchronization, automatic synchronization, and forced synchronization, suitable for fine-grained control.

Flux CD uses a fully automated synchronization strategy that polls Git periodically and automatically aligns the cluster state.

Flux CD gets the edge here and is the winner for me.

Toolchain and Integration Capabilities 

Argo CD supports Helm, Kustomize, Jsonnet, etc. and can be extended with plugins.

Flux CD supports Helm, Kustomize, OCI images, SOPS-encrypted configuration, GitHub Actions, etc.; the ecosystem is very rich.

Flux CD is the winner here for its wide range of integration support.

Multi-tenancy and Privilege Management 

Argo CD has built-in RBAC, supports SSOs such as OIDC, LDAP, and fine-grained privilege assignment.

Flux CD uses Kubernetes' own RBAC system, which is more native but slightly more complex to configure.

If you want ease of use, the winner is Argo CD.

Multi-Cluster Management Capabilities 

Argo CD supports multi-clustering natively, allowing you to switch and manage applications across multiple clusters directly in the UI.

Flux CD also supports it, but you need to manually configure bootstrap and GitRepo for multiple clusters via GitOps. 

Winner: Argo CD 

Security and Keys 

Argo CD is usually combined with Sealed Secrets or Vault, or uses plugins to support SOPS.

Flux CD has native SOPS integration: configure it once, and decryption happens automatically.

Personally, I prefer to use Flux + SOPS in security-oriented scenarios, and the whole key management process is more elegant.

Performance and Scalability 

Flux CD's controller architecture naturally supports horizontal scaling, with stable performance in large-scale environments.

Argo CD features a centralized architecture: feature-rich, but with slightly higher resource consumption.

Winner: Flux CD 

Observability and Problem Troubleshooting 

Real-time status, change history, diff comparison, synchronized logs, etc. are available within the Argo CD UI.

Flux CD relies more on logs and Kubernetes Events and requires additional tools to assist with visualization.

Winner: Argo CD 

Learning Curve 

Argo CD UI is intuitive and easy to install, suitable for GitOps newcomers to get started.

Flux CD focuses more on CLI operations and GitOps concepts, and has a slightly higher learning curve.

Argo CD is easier to get started with.

GitOps Principles 

Flux CD follows GitOps principles 100%: all configuration is declarative, and the cluster automatically aligns itself with Git.

Argo CD supports manual operations and UI synchronization, leaning towards "Controlled GitOps".

While Argo CD has a lot of goodies, if you are a stickler for principles, then Flux CD will be more appealing to you.

Final Thoughts

Argo CD can be summed up as: quick to get started, and it comes with a web interface.

Seriously, the first time I used Argo CD, I had a feeling of “relief”.

After deployment, you can open the web UI and see the status of each application, deploy with one click, roll back, and compare Git and cluster differences. For people like me who are used to kubectl get, it's a relief from information overload.

Its “App of Apps” model is also great for organizing large configurations. For example, I use Argo to manage different configuration repos in multiple environments (dev/stage/prod), which is very intuitive.

On the downside, it's a bit “heavy”. It has its own API server, UI, and controller, which take up a fair amount of resources.

You have to learn its Application CRD if you want to adjust the configuration. Argo CD even provides a CLI for application management and cluster automation.

Here are the commands that can come in handy for the purpose stated above:

argocd app sync rental-app
argocd app rollback rental-app 2

Flux CD can be summed up as a modular tool.

Flux is the engineer's tool: the ultimate in flexibility, configurable in plain text, and capable of being combined into anything you want. It emphasizes declarative configuration and automated synchronization.

Flux CD offers these features:

  • Triggers on Git changes
  • Auto-applies manifests
  • Auto-pushes notifications to Slack
  • Automatically triggers deployments on image updates

Although this can be done in Argo, Flux's modular controllers (e.g. SourceController, KustomizeController) allow us to have fine-grained control over every aspect and build the entire platform like Lego.
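
To give a flavor of that, here is a hedged sketch of wiring a Git source to a kustomization with the Flux CLI; the repository URL and path are placeholders:

flux create source git my-app \
  --url=https://github.com/example/my-app \
  --branch=main \
  --interval=1m

flux create kustomization my-app \
  --source=GitRepository/my-app \
  --path="./deploy" \
  --prune=true \
  --interval=5m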

Of course, the shortcomings are obvious: 

  • No UI
  • The configuration is all based on YAML
  • Documentation is a little thinner than Argo's; you need to read more of the official examples.

Practical advice: how to choose in different scenarios?

Scenario 1: Small team, first time with GitOps? Choose Argo CD. 

  • The visualization interface is friendly.
  • Supports manual deployment/rollback. 
  • Low learning cost, easy for the team to accept.

Scenario 2: Strong security compliance needs? Choose Flux CD. 

  • Fully declarative.
  • Scales seamlessly across hundreds of clusters.
  • It can be integrated with GitHub Actions, SOPS, Flagger, etc. to create a powerful CI/CD system.

Scenario 3: You're already using Argo Workflows or Rollouts 

Then, continue to use Argo CD for a better unified ecosystem experience.

The last bit of personal advice 

Don't get hung up on which one to pick; choose one and start using it, that's the most important thing!

I also had “tool-phobia” at the beginning, but after using them, I realized that GitOps itself is the revolutionary concept, and the tools are just the vehicle. You can start with Argo CD, and then move on to Flux later.

If you're about to design a GitOps process, start with the tool stack you're most familiar with and the capabilities of your team, and then evolve gradually.

by: Abhishek Kumar
Thu, 24 Apr 2025 11:57:47 +0530


When deploying containerized services such as Pi-hole with Docker, selecting the appropriate networking mode is essential for correct functionality, especially when the service is intended to operate at the network level.

The host networking mode allows a container to share the host machine’s network stack directly, enabling seamless access to low-level protocols and ports.

This is particularly critical for applications that require broadcast traffic handling, such as DNS and DHCP services.

This article explores the practical use of host networking mode in Docker, explains why bridge mode is inadequate for certain network-wide configurations, and provides a Docker Compose example to illustrate correct usage.

What does “Host Network” actually mean?

By default, Docker containers run in an isolated virtual network known as the bridge network. Each container receives an internal IP address (typically in the 172.17.0.0/16 range) and communicates through Network Address Translation (NAT).

docker network list with bridge network highlighted

This setup is well-suited for application isolation, but it limits the container’s visibility to the outside LAN.

For instance, services running inside such containers are not directly reachable from other devices on the local network unless specific ports are explicitly mapped.

In contrast, using host network mode grants the container direct access to the host machine’s network stack.

Rather than using a virtual subnet, the container behaves as if it were running natively on the host's IP address (e.g., 192.168.x.x or 10.1.x.x), as assigned by your router.

It can open ports without needing Docker's ports directive, and it responds to network traffic as though it were a system-level process.

Learn Docker: Complete Beginner’s Course
Learn Docker, an important skill to have for any DevOps and modern sysadmin. Learn all the essentials of Docker in this series.

Setting up host network mode using docker compose

While this setup can also be achieved using the docker run command with the --network host flag, I prefer using Docker Compose.

It keeps things declarative and repeatable, especially when you need to manage environment variables, mount volumes, or configure multiple containers together.

Let's walk through an example config that runs an nginx container using host network mode:

version: "3"
services:
  web:
    container_name: nginx-host
    image: nginx:latest
    network_mode: host
docker compose file for nginx container

This configuration tells Docker to run the nginx-host container using the host's network stack.

No need to specify ports: if Nginx is listening on port 80, it's directly accessible at your host's IP address on port 80, without any NAT or port mapping.

Start it up with:

docker compose up -d

Then access it via:

http://192.168.x.x

You’ll get Nginx’s default welcome page directly from your host IP.

nginx welcome page on local network
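
If you'd rather use the docker run route mentioned earlier, the equivalent one-liner would be something like:

docker run -d --name nginx-host --network host nginx:latest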

How is this different from Bridge networking?

By default, Docker containers use the bridge network, where each container is assigned an internal IP (commonly in the 172.17.0.0/16 range).

Here’s how you would configure that:

version: "3"
services:
  web:
    image: nginx:latest
    ports:
      - "8080:80"
docker compose file nginx container testing bridge network

This exposes the container’s port 80 to your host’s port 8080.

nginx welcome page on port 8080

The traffic is routed through Docker’s internal bridge interface, with NAT handling the translation. It’s great for isolation and works well for most applications.

Optional: Defining custom bridge network with external reference

In Docker Compose, a user-defined bridge network offers better flexibility and control than the host network, especially when dealing with multiple services.

This allows you to define custom aliasing, service discovery, and isolation between services, while still enabling them to communicate over a single network.

I personally use this with Nginx Proxy Manager that needs to communicate with multiple services.

docker network list highlighting npm network

These are the services that are all connected to my external npm network:

containers list connected to npm network inside my homelab

Let's walk through how you can create and use a custom bridge network in your homelab setup. First, you'll need to create the network using the following command:

docker network create my_custom_network
creating external docker network

Then, you can proceed with the Docker Compose configuration:

version: "3"
services:
  web:
    image: nginx:latest
    networks:
      - hostnet

networks:
  hostnet:
    external: true
    name: my_custom_network
compose file for nginx container using external network

Explanation:

  • hostnet: This is the name you give to your network inside the Compose file.
  • external: true: This tells Docker Compose to use an existing network, in this case, the network we just created. Docker will not try to create it, assuming it's already available.

By using an external bridge network like this, you can ensure that your services can communicate within a shared network context, but they still benefit from Docker’s built-in networking features, such as automatic service name resolution and DNS, without the potential limitations of the host network.

But... What’s the catch?

Everything has a trade-off, and host networking is no exception. Here’s where things get real:

❌ Security takes a hit

You lose the isolation that containers are famous for. A process inside your container could potentially see or interfere with host-level services.

❌ Port conflicts are a thing

Because your container is now sharing the same network stack as your host, you can’t run multiple containers using the same ports without stepping on each other. With the bridge network, Docker handles this neatly using port mappings. With host networking, it’s all manual.

❌ Not cross-platform friendly

Host networking works only on Linux hosts. If you're on macOS or Windows, it simply doesn’t behave the same way, thanks to how Docker Desktop creates virtual machines under the hood. This could cause consistency issues if your team is split across platforms.

❌ You can’t use some docker features

Things like service discovery (via Docker's DNS) or custom internal networks just won’t work with host mode. You’re bypassing Docker's clever internal network stack altogether.

When to choose which Docker network mode

Here’s a quick idea of when to use what:

  • Bridge Network: Great default. Perfect for apps that just need to run and expose ports with isolation. Works well with Docker Compose and lets you connect services easily using their names.
  • Host Network: Use it when performance or native networking is critical. Ideal for edge services, proxies, or tightly coupled host-level apps.
  • None: There's a network_mode: none too—this disables networking entirely. Use it for highly isolated jobs like offline batch processing or building artifacts.
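
As a quick sketch of that last mode, a container started without networking only has a loopback interface, which you can verify with something like:

docker run --rm --network none alpine:latest ip addr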

Wrapping Up

The host network mode in Docker is best suited for services that require direct interaction with the local network.

Unlike Docker's default bridge network, which isolates containers with internal IP addresses, host mode allows a container to share the host's network stack, including its IP address and ports, without any abstraction.

In my own setup, I use host mode exclusively for Pi-hole, which acts as both a DNS resolver and DHCP server for the entire network.

For most other containers, such as web applications, reverse proxies, or databases, the bridge network is more appropriate. It ensures better isolation, security, and flexibility when exposing services selectively through port mappings.

In summary, host mode is a powerful but specialized tool. Use it only when your containerized service needs to behave like a native process on the host system.

Otherwise, Docker’s default networking modes will serve you better in terms of control and compartmentalization.

by: LHB Community
Sun, 20 Apr 2025 12:23:45 +0530


As a developer, efficiency is key. Being a full-stack developer myself, I’ve always thought of replacing boring tasks with automation.

What could happen if I just keep writing new code in a Python file, and it gets evaluated every time I save it? Isn’t that a productivity boost?

'Hot reload' is that valuable feature of the modern development process that automatically reloads or refreshes the code after you make changes to a file. It helps developers see the effect of their changes instantly and avoids manually restarting the server or refreshing the browser.

Over the years, I've used tools like entr to keep Docker containers in sync every time I modify a docker-compose.yml file, or to test different CSS designs on the fly with browser-sync.

1. entr

entr (Event Notify Test Runner) is a lightweight command-line tool for monitoring file changes and triggering specified commands. It's one of my favorite tools to restart any CLI process, whether it's triggering a docker build, restarting a Python script, or rebuilding a C project.

For developers who are used to the command line, entr provides a simple and efficient way to perform tasks such as building, testing, or restarting services in real time.

Key Features

  • Lightweight, no additional dependencies.
  • Highly customizable
  • Ideal for use in conjunction with scripts or build tools.
  • Linux only.

Installation

All you have to do is type in the following command in the terminal:

sudo apt install -y entr

Usage

Auto-trigger build tools: Use entr to automatically execute build commands like make, webpack, etc. Here's the command I use to do that:

ls docker-compose.yml | entr -r docker build .

Here, the -r flag reloads the child process, which is the docker build command.


Automatically run tests: Automatically re-run unit tests or integration tests after modifying the code.

ls *.ts | entr bun test
entr usage

2. nodemon

nodemon is an essential tool for developers working on Node.js applications. It automatically monitors changes to project files and restarts the Node.js server when files are modified, eliminating the need for developers to restart the server manually.

Key Features

  • Monitor file changes and restart Node.js server automatically.
  • Supports JavaScript and TypeScript projects
  • Customize which files and directories to monitor.
  • Supports common web frameworks such as Express, Hapi.

Installation

You can type in a single command in the terminal to install the tool:

npm install -g nodemon

If you are installing Node.js and npm for the first time on an Ubuntu-based distribution, you can follow our Node.js installation tutorial.

Usage

When you type in the following command, it starts server.js and will automatically restart the server if the file changes.

nodemon server.js
nodemon

3. LiveReload.net

LiveReload.net is a very popular tool, especially for front-end developers. It automatically refreshes the browser after you save a file, helping developers see the effect of changes immediately, eliminating the need to manually refresh the browser.

Unlike others, it is a web-based tool, and you need to head to its official website to get started. Every file remains on your local network. No files are uploaded to a third-party server.

Key Features

  • Seamless integration with editors
  • Supports custom trigger conditions to refresh the page
  • Good compatibility with front-end frameworks and static websites.

Usage

livereload

It's stupidly simple. Just load up the website, and drag and drop your folder to start making live changes. 

4. fswatch

fswatch is a cross-platform file change monitoring tool for Linux, macOS, and developers using it on Windows via WSL (Windows Subsystem for Linux). It is powerful enough to monitor multiple files and directories for changes and perform actions accordingly.

Key Features

  • Supports cross-platform operation and can be used on Linux and macOS.
  • It can be used with custom scripts to trigger multiple operations.
  • Flexible configuration options to filter specific types of file changes.

Installation

To install it on a Linux distribution, type in the following in the terminal:

sudo apt install -y fswatch

If you have a macOS computer, you can use the command:

brew install fswatch

Usage

You can try typing in the command here:

fswatch -o . | xargs -n1 -I{} make
fswatch

And, then you can chain this command with an entr command for a rich interactive development experience.

ls hellomake | entr -r ./hellomake

The “fswatch” command will invoke make to compile the C application, and then if our binary “hellomake” is modified, we'll run it again. Isn't this a time saver?

5. Watchexec

Watchexec is a cross-platform command line tool for automating the execution of specified commands when a file or directory changes. It is a lightweight file monitor that helps developers automate tasks such as running tests, compiling code, or reloading services when a source code file changes. 

Key Features

  • Supports cross-platform use (macOS, Linux, Windows).
  • Fast, written in Rust.
  • Lightweight, no complex configuration.

Installation

On Linux, just type in:

sudo apt install watchexec

And, if you want to try it on macOS (via homebrew):

brew install watchexec

You can also download the corresponding binaries for your system from the project's GitHub releases section.

Usage

All you need to do is just run the command:

watchexec -e py "pytest"

This will run pytest every time a Python file in the current directory is modified.

6. BrowserSync

BrowserSync is a powerful tool that not only monitors file changes, but also synchronizes pages across multiple devices and browsers. BrowserSync can be ideal for developers who need to perform cross-device testing.

Key features

  • Cross-browser synchronization.
  • Automatically refreshes multiple devices and browsers.
  • Built-in local development server.

Installation

Considering you have Node.js installed first, type in the following command:

npm i -g browser-sync

Or, you can use:

npx browser-sync

Usage

Here is how the commands for it would look like:

browser-sync start --server --files "/*.css, *.js, *.html"
npx browser-sync start --server --files "/*.css, *.js, *.html"

You can use either of the two commands for your experiments.

browsersync

This command starts a local server, monitors the CSS, JS, and HTML files for changes, and automatically refreshes the browser as soon as a change occurs. If you're a developer who isn't using any modern frontend framework, this comes in handy.

7. watchdog & watchmedo

Watchdog is a file system monitoring library written in Python that allows you to monitor file and directory changes in real time. Whether it's file creation, modification, deletion, or file move, Watchdog can help you catch these events and trigger the appropriate action.

Key Features

  • Cross-platform support
  • Provides full flexibility with its Python-based API
  • Includes watchmedo script to hook any CLI application easily

Installation

Install Python first, and then install with pip using the command below:

pip install watchdog

Usage

Type in the following and watch it in action:

watchmedo shell-command --patterns="*.py" --recursive --command="python factorial.py" .
watchdog

This command watches the current directory for changes to Python files and re-runs the given command whenever a matching file is modified, created, or deleted.

In the command, --patterns="*.py" watches .py files, --recursive watches subdirectories, and --command="python factorial.py" runs the Python file.

Conclusion

Hot reloading tools have become increasingly important in the development process, and they can help developers save a lot of time and effort and increase productivity. With tools like entr, nodemon, LiveReload, Watchexec, Browser Sync, and others, you can easily automate reloading and live feedback without having to manually restart the server or refresh the browser.

Integrating these tools into your development process can drastically reduce repetitive work and waiting time, allowing you to focus on writing high-quality code.

Whether you're developing a front-end application or a back-end service or managing a complex project, using these hot-reloading tools will enhance your productivity.


Bhuwan Mishra is a Fullstack developer, with Python and Go as his tools of choice. He takes pride in building and securing web applications, APIs, and CI/CD pipelines, as well as tuning servers for optimal performance. He also has a passion for working with Kubernetes.

by: LHB Community
Sat, 19 Apr 2025 15:59:35 +0530


As a Kubernetes engineer, I deal with kubectl almost every day. Pod status, service lists, locating CrashLoopBackOff pods, YAML configuration comparison, log viewing... these are almost daily operations!

But to be honest, in the process of switching namespaces, manually copying pod names, and scrolling through logs again and again, I gradually felt burned out. That is, until I came across KubeTUI, a little tool that made me feel like I was “getting back on my feet”.

What is KubeTUI

KubeTUI, short for Kubernetes Terminal User Interface, is a Kubernetes dashboard that can be used in the terminal. It's not like the traditional kubectl, which makes you memorize and type out commands, or the Kubernetes Dashboard, which requires a browser, Ingress, a token to log in, and a bunch of configuration.

In a nutshell, it's a tool that lets you happily browse the state of your Kubernetes cluster from your terminal.

Installing KubeTUI

KubeTUI is written in Rust, and you can download its binary releases from GitHub. Once you do that, you need to set up a Kubernetes environment to build and monitor your application.

Let me show you how that is done, with an example of building a WordPress application.

Setting up the Kubernetes environment

We’ll use K3s to spin up a Kubernetes environment. The steps are mentioned below.

Step 1: Install k3s and run

curl -sfL https://get.k3s.io | sh -

With this single command, k3s will start itself after installation. Later on, you can use the command below to start the k3s server.

sudo k3s server --write-kubeconfig-mode='644'

Here’s a quick explanation of what the command includes:

  • k3s server: It starts the K3s server component, which is the core of the Kubernetes control plane.
  • --write-kubeconfig-mode='644': It ensures that the generated kubeconfig file has permissions that allow the owner to read and write it, and the group and others to only read it. If you start the server without this flag, you need to use sudo for all k3s commands.

Step 2: Check available nodes via kubectl

We need to verify that the Kubernetes control plane is actually working before we can make any deployments. You can use the command below to check that:

k3s kubectl get node
kubectl

Step 3: Deploy WordPress using Helm chart (Sample Application)

K3s provides Helm integration, which helps manage Kubernetes applications. Simply apply a YAML manifest to spin up WordPress in the Kubernetes environment from the Bitnami Helm chart.

Create the wpdev namespace and a file named wordpress.yaml with contents along the lines of the sketch below (the k3s HelmChart CRD pointing at the Bitnami WordPress chart, with chart values left at their defaults):
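
k3s kubectl create namespace wpdev
cat <<'EOF' > wordpress.yaml
# HelmChart resource consumed by the k3s Helm controller
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: wordpress
  namespace: kube-system
spec:
  repo: https://charts.bitnami.com/bitnami
  chart: wordpress
  targetNamespace: wpdev
EOF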

You can then apply the configuration file to the application using the command:

k3s kubectl apply -f wordpress.yaml

It will take around 2–3 minutes for the whole setup to complete.

Step 4: Launch KubeTUI

To launch KubeTUI, type in the following command in the terminal.

kubetui
kubetui

Here's what you will see. There are no pods in the default namespace. Let’s switch the namespace to wpdev, which we created earlier, by hitting “n”.

change namespaces

How to Use KubeTui

To navigate to different tabs, like switching screens from Pod to Config and Network, you can click with your mouse or press the corresponding number as shown:

kubetui

You can also switch tabs with the keyboard:

kubetui switch tabs

If you need help with Kubetui at any time, press ? to see all the available options.

kubetui help

It integrates a vim-like search mode. To activate search mode, enter /.

Tip for Log filtering 

I discovered an interesting feature to filter logs from multiple Kubernetes resources. For example, say we want to target logs from all pods with names containing wordpress. The query below combines the logs from all matching pods:

pod:wordpress

You can target different resource types like svc, jobs, deploy, statefulsets, replicasets with the log filtering in place. Instead of combining logs, if you want to remove some pods or container logs, you can achieve it with !pod:pod-to-exclude and !container:container-to-exclude filters.

Conclusion

Working with Kubernetes involves switching between different namespaces, pods, networks, configs, and services. KubeTUI can be a valuable asset in managing and troubleshooting a Kubernetes environment.

I find myself more productive using tools like KubeTUI. Share your thoughts on what tools you’re utilizing these days to make your Kubernetes journey smoother.


Bhuwan Mishra is a Fullstack developer, with Python and Go as his tools of choice. He takes pride in building and securing web applications, APIs, and CI/CD pipelines, as well as tuning servers for optimal performance. He also has a passion for working with Kubernetes.

by: Abhishek Prakash
Mon, 14 Apr 2025 10:58:44 +0530


Lately, whenever I tried accessing a server via SSH, it asked for a passphrase:

Enter passphrase for key '/home/abhishek/.ssh/id_rsa':

Interestingly, it was asking for my local system's account password, not the remote server's.

Entering the account password for the SSH key is a pain. So I fixed it with this command, which basically resets the passphrase:

ssh-keygen -p

It then asked for the file which has the key. This is the private SSH key, usually located in the ~/.ssh/id_rsa file. I provided the absolute path for that.

Now it asked for the 'old passphrase' which is the local user account password. I provided it one more time and then just pressed enter for the new passphrase.

❯ ssh-keygen -p
Enter file in which the key is (/home/abhishek/.ssh/id_ed25519): /home/abhishek/.ssh/id_rsa
Enter old passphrase: 
Enter new passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved with the new passphrase.

And thus, it didn't ask me to enter the passphrase for the SSH private key anymore. It didn't even need a reboot or anything.

Wondering why it happened and how it was fixed? Let's go in detail.

What caused 'Enter passphrase for key' issue?

Here is my efficient SSH workflow. I have the same set of SSH keys on my personal systems, so I don't have to create them new and add them to the servers when I install a new distro.

Since the public SSH key is added to the servers, I don't have to enter the root password for the servers every time I use SSH.

And then I have an SSH config file in place that maps the server's IP address with an easily identifiable name. It further smoothens my workflow.

Recently, I switched my personal system to CachyOS. I copied my usual SSH keys from an earlier backup and gave them the right permission.

But when I tried accessing any server, it asked for a passphrase:

Enter passphrase for key '/home/abhishek/.ssh/id_rsa':

No, it was not the remote server's user password. It asked for my regular, local system's password, as if I were using sudo.

I am guessing that some settings somewhere were left untouched and it started requiring a password to unlock the private SSH key.

This is an extra layer of security, and I don't like the inconvenience that comes with it.

One method to use SSH without entering the password to unlock the key each time is to reset the password on the SSH key.

And that's what you saw at the beginning of this article.

Fixing it by resetting the password on SSH key

Note down the location of your SSH private key. Usually, it is ~/.ssh/id_rsa unless you have multiple SSH key sets for different servers.

Enter the following command to reset the password on an SSH key:

ssh-keygen -p

It will ask you for the path to key. Provide the absolute path to your private SSH key.

Enter file in which the key is (/home/abhishek/.ssh/id_ed25519):

It then asks you to enter the old passphrase, which should be your local account's password, the same one that you use for sudo.

Enter old passphrase:

Once you have entered that, it will ask you to enter new passphrase. Keep it empty by pressing the enter key. This way, it won't have any password.

Enter new passphrase (empty for no passphrase):

Press enter key again when it asks:

Enter same passphrase again:

And that's about it.
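If you prefer doing it in a single step, ssh-keygen also accepts the key file and the new passphrase as flags. A minimal sketch, assuming the key is at ~/.ssh/id_rsa (it will still prompt once for the old passphrase):

ssh-keygen -p -f ~/.ssh/id_rsa -N ""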

Reset the password on ssh key to fix the passphrase issue

You can instantly verify it. You don't need to reboot the system or even log out from the terminal.

Enjoy SSH 😄

by: Abhishek Prakash
Fri, 11 Apr 2025 17:22:49 +0530


Linux can feel like a big world when you're just getting started — but you don’t have to figure it all out on your own.

Each edition of LHB Linux Digest brings you clear, helpful articles and quick tips to make everyday tasks a little easier.

Chances are, a few things here will click with you — and when they do, try working them into your regular routine. Over time, those small changes add up and before you know it, you’ll feel more confident and capable navigating your Linux setup.

Here are the highlights of this edition:

  • Running sudo without password
  • Port mapping in Docker
  • Docker log viewer tool
  • And more tools, tips and memes for you
  • This edition of LHB Linux Digest newsletter is supported by Typesense.

❇️ Typesense, Open Source Algolia Alternative

Typesense is the free, open-source search engine for forward-looking devs.

Make it easy on people: Tpyos? Typesense knows we mean typos, and they happen. With ML-powered typo tolerance and semantic search, Typesense helps your customers find what they’re looking for—fast.

👉 Check them out on GitHub.

GitHub - typesense/typesense: Open Source alternative to Algolia + Pinecone and an Easier-to-Use alternative to ElasticSearch ⚡ 🔍 ✨ Fast, typo tolerant, in-memory fuzzy Search Engine for building delightful search experiences
by: Umair Khurshid
Tue, 08 Apr 2025 12:11:49 +0530


Port management in Docker and Docker Compose is essential to properly expose containerized services to the outside world, both in development and production environments.

Understanding how port mapping works helps avoid conflicts, ensures security, and improves network configuration.

This tutorial will walk you through how to configure and map ports effectively in Docker and Docker Compose.

What is port mapping in Docker?

Port mapping exposes network services running inside a container to the host, to other containers on the same host, or to other hosts and network devices. It allows you to map a specific port from the host system to a port on the container, making the service accessible from outside the container.

In the schematic below, there are two separate services running in two containers, and both use port 80. Their ports are mapped to the host's ports 8080 and 8090, and thus they are accessible from outside using these two ports.

Docker port mapping example

How to map ports in Docker

Typically, a running container has its own isolated network namespace with its own IP address. By default, containers can communicate with each other and with the host system, but external network access is not automatically enabled.

Port mapping is used to create communication between the container's isolated network and the host system's network.

For example, let's map Nginx to port 80:

docker run -d --publish 8080:80 nginx

The --publish flag (usually shortened to -p) is what creates the association between the host port (8080) and the port of interest inside the container (80).

In this case, to access it, you simply use a web browser and access http://localhost:8080
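If you prefer the terminal, a quick curl request against the mapped port confirms the same thing (assuming the container started above is still running):

curl -I http://localhost:8080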

On the other hand, if the image you are using to create the container makes good use of the EXPOSE instruction, you can use the command in another way:

docker run -d --publish-all hello-world

Docker takes care of choosing a random port on your machine (instead of port 80 or other fixed ports) to map to the ports specified in the Dockerfile.
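To find out which host port Docker actually picked, you can ask with docker port (the container ID is a placeholder; grab it from docker ps):

docker port <container_id>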

Mapping ports with Docker Compose

Docker Compose allows you to define container configurations in a docker-compose.yml. To map ports, you use the ports YAML directive.

version: '3.8'
services:
  web:
    image: nginx:latest
    ports:
      - "8080:80"

In this example, as in the previous case, the Nginx container will expose port 80 on the host's port 8080.
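To bring the service defined above up in the background, you would then run the following from the directory containing docker-compose.yml:

docker compose up -d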

Port mapping vs. exposing

It is important not to confuse the ports directive with the expose directive. The former creates true port forwarding to the outside. The latter only documents that an internal port is used by the container, but does not expose anything to the host.

services:
  app:
    image: myapp
    expose:
      - "3000"

In this example, port 3000 will only be accessible from other containers in the same Docker network, but not from outside.

Mapping Multiple Ports

You just saw how to map a single port, but Docker also allows you to map more than one port at a time. This is useful when your container needs to expose multiple services on different ports.

Let's configure an Nginx server to serve both HTTP and HTTPS traffic:

docker run -p 8080:80 -p 443:443 nginx

Now the server listens for both HTTP traffic on port 8080 (mapped to port 80 inside the container) and HTTPS traffic on port 443 (mapped to port 443 inside the container).

Specifying host IP address for port binding

By default, Docker binds container ports to all available IP addresses on the host machine. If you need to bind a port to a specific IP address on the host, you can specify that IP in the command. This is useful when you have multiple network interfaces or want to restrict access to specific IPs.

docker run -p 192.168.1.100:8080:80 nginx

This command binds port 8080 on the specific IP address 192.168.1.100 to port 80 inside the container.
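A common variant is binding only to the loopback interface, so the service is reachable from the host itself but not from the rest of the network:

docker run -p 127.0.0.1:8080:80 nginx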

Port range mapping

Sometimes, you may need to map a range of ports instead of a single port. Docker allows this by specifying a range for both the host and container ports. For example,

docker run -p 5000-5100:5000-5100 nginx

This command maps a range of ports from 5000 to 5100 on the host to the same range inside the container. This is particularly useful when running services that need multiple ports, like a cluster of servers or applications with several endpoints.

Using different ports for host and container

In situations where you need to avoid conflicts, security concerns, or manage different environments, you may want to map different port numbers between the host machine and the container. This can be useful if the container uses a default port, but you want to expose it on a different port on the host to avoid conflicts.

docker run -p 8081:80 nginx

This command maps port 8081 on the host to port 80 inside the container. Here, the container is still running its web server on port 80, but it is exposed on port 8081 on the host machine.

Binding to UDP ports (if you need that)

By default, Docker maps TCP ports. However, you can also map UDP ports if your application uses UDP. This is common for protocols and applications that require low latency, real-time communication, or broadcast-based communication.

For example, DNS uses UDP for query and response communication due to its speed and low overhead. If you are running a DNS server inside a Docker container, you would need to map UDP ports.

docker run -p 53:53/udp ubuntu/bind9

Here this command maps UDP port 53 on the host to UDP port 53 inside the container.

Inspecting and verifying port mapping

Once you have set up port mapping, you may want to verify that it’s working as expected. Docker provides several tools for inspecting and troubleshooting port mappings.

To list all active containers and see their port mappings, use the docker ps command. The output includes a PORTS column that shows the mapping between the host and container ports.

docker ps

This might output something like:

inspecting and verifying port mapping in Docker

If you need more detailed information about a container's port mappings, you can use docker inspect. This command gives you JSON output with detailed information about the container's configuration.

docker inspect <container_id> | grep "Host"

This command will display the host port bindings for the container, including the host IP and host port for each published container port.
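If you would rather pull out just the port section instead of grepping the whole JSON, docker inspect's --format flag can do it directly (the container ID is a placeholder):

docker inspect --format '{{json .NetworkSettings.Ports}}' <container_id>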

Wrapping Up

Learn Docker: Complete Beginner’s Course
Learn Docker, an important skill to have for any DevOps and modern sysadmin. Learn all the essentials of Docker in this series.

When you are first learning Docker, one of the more tricky topics is often networking and port mapping. I hope this rundown has helped clarify how port mapping works and how you can effectively use it to connect your containers to the outside world and orchestrate services across different environments.

by: Team LHB
Mon, 07 Apr 2025 17:16:55 +0530


After years of training DevOps students and taking interviews for various positions, I have compiled this list of Docker interview questions (with answers) that are generally asked in the technical round.

I have categorized them into various levels:

  • Entry level (very basic Docker questions)
  • Mid-level (slightly deep in Docker)
  • Senior-level (advanced level Docker knowledge)
  • Common for all (generic Docker stuff for all)
  • Practice Dockerfile examples with optimization challenge (you should love this)

If you are absolutely new to Docker, I highly recommend our Docker course for beginners.

Learn Docker: Complete Beginner’s Course
Learn Docker, an important skill to have for any DevOps and modern sysadmin. Learn all the essentials of Docker in this series.

Let's go.

Entry level Docker questions

What is Docker?

Docker is a containerization platform that allows you to package an application and its dependencies into a container. Unlike virtualization, Docker containers share the host OS kernel, making them more lightweight and efficient.

What is Containerization?

It’s a way to package software in a format that can run isolated on a shared OS.

What are Containers?

Containers are packages that contain an application along with everything it needs to run, such as libraries and dependencies.

What is Docker image?

  • Docker image is a read-only template that contains a set of instructions for creating a container that can run on the Docker platform.
  • It provides a convenient way to package up applications and preconfigured server environments, which you can use for your own private use or share publicly with other Docker users.

What is Docker Compose?

It is a tool for defining and running multi-container Docker applications.

What’s the difference between virtualization and containerization?

Virtualization abstracts the entire machine with separate VMs, while containerization abstracts the application with lightweight containers sharing the host OS.

Describe a Docker container’s lifecycle

Create | Run | Pause | Unpause | Start | Stop | Restart | Kill | Destroy

Docker lifecycle

What is a volume in Docker, and which command do you use to create it?

  • A volume in Docker is a persistent storage mechanism that allows data to be stored and accessed independently of the container's lifecycle.
  • Volumes enable you to share data between containers or persist data even after a container is stopped or removed.
docker volume create <volume_name>
Example: docker run -v data_volume:/var/lib/mysql mysql

What is Docker Swarm?

Docker Swarm is a tool for clustering & managing containers across multiple hosts.

How do you remove unused data in Docker?

Use docker system prune to remove unused data, including stopped containers, unused networks, and dangling images.
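If you also want to reclaim space from unused volumes and all unused images (not just dangling ones), the following more aggressive variant works, but use it with care since it is destructive:

docker system prune -a --volumes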

Mid-level Docker Questions

What command retrieves detailed information about a Docker container?

Use docker inspect <container_id> to get detailed JSON information about a specific Docker container.

How do the Docker Daemon and Docker Client interact?

The Docker Client communicates with the Docker Daemon through a REST API over a Unix socket or TCP/IP.

How can you set CPU and memory limits for a Docker container?

Use docker run --memory="512m" --cpus="1.5" <image> to set memory and CPU limits.

Can a Docker container be configured to restart automatically?

Yes, a Docker container can be configured to restart automatically using restart policies such as --restart always or --restart unless-stopped.
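For example, a throwaway Nginx container that keeps coming back after daemon or host restarts until you explicitly stop it:

docker run -d --restart unless-stopped nginx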

What methods can you use to debug issues in a Docker container?

  • Inspect logs with docker logs <container_id> to view output and error messages.
  • Execute commands interactively using docker exec -it <container_id> /bin/bash to access the container's shell.
  • Check container status and configuration with docker inspect <container_id>.
  • Monitor resource usage with docker stats to view real-time performance metrics.
  • Use Docker's built-in debugging tools and third-party monitoring solutions for deeper analysis.

What is the purpose of Docker Secrets?

Docker Secrets securely manage sensitive data like passwords for Docker services. Use docker secret create <secret_name> <file> to add secrets.
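As a small illustration (the secret name and value are placeholders, and the command assumes the node is part of a swarm, since secrets only work in swarm mode), a secret can also be piped in from stdin:

echo "supersecret" | docker secret create db_password -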

What are the different types of networks in Docker, and how do they differ?

Docker provides several types of networks to manage how containers communicate with each other and with external systems.

Here are the main types:

  • Bridge
  • None
  • Host
  • Overlay Network
  • Macvlan Network
  • IPvlan Network

bridge: This is the default network mode. Each container connected to a bridge network gets its own IP address and can communicate with other containers on the same bridge network using this IP.

docker run ubuntu

Useful for scenarios where you want isolated containers to communicate through a shared internal network.

none: Containers attached to the none network are not connected to any network. They don't have any network interfaces except the loopback interface (lo).

docker run --network=none ubuntu

Useful when you want to create a container with no external network access for security reasons.

host: The container shares the network stack of the Docker host, which means it has direct access to the host's network interfaces. There's no isolation between the container and the host network.

docker run --network=host ubuntu

Useful when you need the highest possible network performance, or when you need the container to use a service on the host system.

Overlay Network : Overlay networks connect multiple Docker daemons together, enabling swarm services to communicate with each other. It's used in Docker Swarm mode for multi-host networking.

docker network create -d overlay my_overlay_network

Useful for distributed applications that span multiple hosts in a Docker Swarm.

Macvlan Network : Macvlan networks allow you to assign a MAC address to a container, making it appear as a physical device on the network. The container can communicate directly with the physical network using its own IP address.

docker network create -d macvlan --subnet=192.168.1.0/24 --gateway=192.168.1.1 -o parent=eth0 my_macvlan_network

Useful when you need containers to appear as physical devices on the network and need full control over the network configuration.

IPvlan Network: Similar to Macvlan, but uses different methods to route packets. It's more lightweight and provides better performance by leveraging the Linux kernel's built-in network functionalities.

docker network create -d ipvlan --subnet=192.168.1.0/24 --gateway=192.168.1.1 -o parent=eth0 my_ipvlan_network

Useful for scenarios where you need low-latency, high-throughput networking with minimal overhead.

Explain the main components of Docker architecture

Docker consists of the Docker Host, Docker Daemon, Docker Client, and Docker Registry.

  • The Docker Host is the computer (or server) where Docker is installed and running. It's like the home for Docker containers, where they live and run.
  • The Docker Daemon is a background service that manages Docker containers on the Docker Host. It's like the manager of the Docker Host, responsible for creating, running, and monitoring containers based on instructions it receives.
  • The Docker Client communicates with the Docker Daemon, which manages containers.
  • The Docker Registry stores and distributes Docker images.

How does a Docker container differ from an image?

A Docker image is a static, read-only blueprint, while a container is a running instance of that image. Containers are dynamic and can be modified or deleted without affecting the original image.

Explain the purpose of a Dockerfile.

Dockerfile is a script containing instructions to build a Docker image. It specifies the base image, sets up the environment, installs dependencies, and defines how the application should run.

How do you link containers in Docker?

Docker provides network options to enable communication between containers. Docker Compose can also be used to define and manage multi-container applications.

How can you secure a Docker container?

Container security involves using official base images, minimizing the number of running processes, implementing least privilege principles, regularly updating images, and utilizing Docker security scanning tools, e.g., Docker vulnerability scanning.

Difference between ARG & ENV?

  • ARG is for build-time variables, and its scope is limited to the build process.
  • ENV is for environment variables, and its scope extends to both the build process and the running container.
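A small, hypothetical Dockerfile fragment illustrating the scope difference: APP_VERSION exists only while the image is being built, while APP_ENV remains set inside the running container.

FROM alpine:latest
ARG APP_VERSION=1.0
ENV APP_ENV=production
RUN echo "Building version $APP_VERSION"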

Difference between RUN, ENTRYPOINT & CMD?

  • RUN : Executes a command during the image build process, creating a new image layer.
  • ENTRYPOINT : Defines a fixed command that always runs when the container starts. Note: it can be overridden at runtime using the --entrypoint flag.
  • CMD : Specifies a default command or arguments that can be overridden at runtime.
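To make the distinction concrete, here is a minimal, hypothetical Dockerfile: the container always runs echo, and CMD only supplies a default argument.

FROM alpine:latest
ENTRYPOINT ["echo"]
CMD ["Hello from CMD"]

Running docker run <image> prints the default message, while docker run <image> "Something else" replaces only the CMD argument at runtime.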

Difference between COPY & ADD?

  • If you are just copying local files, it's often better to use COPY for simplicity.
  • Use ADD when you need additional features like extracting compressed archives or pulling resources from URLs.

How do you drop the MAC_ADMIN capability when running a Docker container?

Use the --cap-drop flag with the docker run command:

docker run --cap-drop MAC_ADMIN ubuntu

How do you add the NET_BIND_SERVICE capability when running a Docker container?

Use the --cap-add flag with the docker run command:

docker run --cap-add NET_BIND_SERVICE ubuntu

How do you run a Docker container with all privileges enabled?

Use the --privileged flag with the docker run command:

docker run --privileged ubuntu
by: Abhishek Prakash
Fri, 28 Mar 2025 18:10:14 +0530


Welcome to the latest edition of LHB Linux Digest. I don't know if you have noticed but I have changed the newsletter day from Wednesday to Friday so that you can enjoy your Fridays learning something new and discovering some new tool. Enjoy 😄

Here are the highlights of this edition :

  • Creating a .deb package from Python app
  • Quick Vim tip on indentation
  • Pushing Docker image to Hub
  • And more tools, tips and memes for you
  • This edition of LHB Linux Digest newsletter is supported by PikaPods.

❇️ Self-hosting without hassle

PikaPods allows you to quickly deploy your favorite open source software. All future updates are handled automatically by PikaPods while you enjoy using the software. I use it to self host Umami analytics.

Oh! You get $5 free credit, so try it out and see if you could rely on PikaPods.

PikaPods - Instant Open Source App Hosting
Run the finest Open Source web apps from $1.20/month, fully managed, no tracking, no ads, full privacy. Self-hosting was never this convenient.
by: Abhishek Kumar
Fri, 28 Mar 2025 17:18:28 +0530


Docker has changed the way we package and distribute applications, but I only truly appreciated its power when I needed to share a project with a friend.

Initially, we used docker save and docker load to transfer the image, which worked fine but was cumbersome.

Then, while browsing the Docker documentation, I discovered how easy it was to push images to Docker Hub.

That was a game-changer! Now, I push my final builds to Docker Hub the moment they're done, allowing my clients and collaborators to pull and run them effortlessly.

In this guide, I’ll walk you through building, tagging, pushing, and running a Docker image.

To keep things simple, we’ll create a minimal test image.

💡
If you're new to Docker and want a deep dive, check out our DevOps course, which covers Docker extensively. We’ve covered Docker installation in countless tutorials as well, so we’ll skip that part here and jump straight into writing a Dockerfile.

Writing a simple Dockerfile

A Dockerfile defines how to build your image. It contains a series of instructions that tell Docker how to construct the image layer by layer.

Let’s create a minimal one:

# Use an official lightweight image
FROM alpine:latest

# Install a simple utility
RUN apk add --no-cache figlet

# Set the default command
CMD ["/usr/bin/figlet", "Docker is Fun!"]
  • FROM alpine:latest – This sets the base image to Alpine Linux, a minimal and lightweight distribution.
  • RUN apk add --no-cache figlet – Installs the figlet package using Alpine's package manager (apk), with the --no-cache option to keep the image clean.
  • CMD ["/usr/bin/figlet", "Docker is Fun!"] – Specifies the default command that will run when a container is started.

Save this file as Dockerfile in an empty directory.

creating dockerfile

Building the docker image

To build the image, navigate to the directory containing the Dockerfile and run:

docker build -t <cool-image-name> .
  • docker build – The command to build an image.
  • -t cool-image-name – The -t flag assigns a tag (cool-image-name) to the image, making it easier to reference later.
  • . – The dot tells Docker to look for the Dockerfile in the current directory.
building docker image

Once completed, list your images to confirm:

docker images
viewing installed docker images

Running the docker image

To run the container and see the output:

docker run <cool-image-name>

You should see ASCII art text saying, "Docker is Fun!"

running the created docker image

Tagging the Image

Before pushing to a registry, we need to tag the image with our Docker Hub username:

docker tag <cool-image-name> your-dockerhub-username/cool-image-name:latest
  • docker tag – Creates an alias for the image.
  • your-dockerhub-username/cool-image-name:latest – This follows the format username/repository-name:tag. The latest tag is used as a default version identifier.

List images again to see the updated tag:

docker images
tagging the created lhb-tutorial image with username followed by image name

Pushing to Docker Hub

First, log in to Docker Hub:

docker login
💡
If you’re using two-factor authentication, you’ll need to generate an access token from Docker Hub and use that instead of your password.

You will be prompted to enter your Docker Hub username and password.

logging in docker

Once authenticated, you can push the image:

docker push your-dockerhub-username/cool-image-name:latest
pushing image to docker hub

And that’s it! Your image is now live on Docker Hub.

docker hub repository for lhb-tutorial

Anyone can pull and run it with:

docker pull your-dockerhub-username/cool-image-name:latest
docker run your-dockerhub-username/cool-image-name

Feels great, doesn’t it?

Alternatives to Docker Hub

Docker Hub is not the only place to store images. Here are some alternatives:

Self-hosted Docker Registry

If you need complete control over your images, you can set up your own registry by running:

docker run -d -p 5000:5000 --name registry registry:2

This starts a private registry on port 5000, allowing you to store and retrieve images without relying on external providers.
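Once the registry is running, pushing to it is just a matter of retagging the image with the registry address, for example:

docker tag <cool-image-name> localhost:5000/<cool-image-name>
docker push localhost:5000/<cool-image-name>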

You can read more about this in docker's official documentation to host your own docker registry.

Final thoughts

Building and pushing Docker images has completely streamlined how I distribute my projects.

What once felt like a tedious process is now as simple as writing a Dockerfile, tagging an image, and running a single push command.

No more manual file transfers or complex setup steps, it’s all automated and ready to be pulled anywhere.

However, Docker Hub's free tier limits private repositories to just one. For personal projects, that’s a bit restrictive, which is why I’m more inclined toward self-hosting my own Docker registry.

It gives me complete control, avoids limitations, and ensures I don’t have to worry about third-party policies.

What about you? Which container registry do you use for your projects? Have you considered self-hosting your own registry? Drop your thoughts in the comments.

by: Abhishek Prakash
Fri, 28 Mar 2025 12:29:12 +0530


Que: How do I go to the root directory in Linux command line?

The simple answer is, you type the cd command like this:

cd /

That will put you in the root directory, which is the starting point for the Linux directory structure.

Linux directory structure

If you want to go to the /root directory (i.e. home directory of the root user), you'll have to use:

cd /root

I know that new Linux users can be confused by the notation of the root directory (/) and the /root directory.

Understand the difference between / and /root

New Linux users often confuse two important concepts: the root directory (/) and the root user's home directory (/root). They sound similar but serve different purposes:

The root directory (/):

  • This is the primary directory that contains all other directories and files on your Linux system
  • It's the starting point of the filesystem hierarchy
  • All paths in Linux begin from this location
  • You can access it using cd / regardless of your current location

The root user's home directory (/root):

  • This is the home directory for the root user (the superuser with all the access)
  • It's located at /root (a directory inside the root directory)
  • Regular users may or may not have permission to access this directory
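A quick way to see the difference from the terminal is to compare the permissions on the two directories (works as a regular user):

ls -ld / /root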

Learn more about the Linux directory structure in the GNU/Linux filesystem documentation.

💡
Navigating to the root directory doesn't require special privileges. A non-root user can also enter the root directory. However, modifying files there requires root permissions.

Understanding / as root directory vs directory separator

Forward slash used as root and as directory separator

The forward slash (/) in Linux serves dual purposes, which can be confusing for newcomers:

As the root directory:

  • When used alone or at the beginning of a path, it refers to the root directory
  • Example: cd / or cd /home/user

As a directory separator:

  • When used between directory names, it separates different levels in the path
  • Example: In /home/user/Documents, the slashes separate the directories

This dual usage is important to understand for proper navigation. When you see a path like /var/log/syslog:

  • The first / indicates we're starting from the root directory
  • The subsequent / characters are separators between directories
# Go to a directory using an absolute path (starting from root)
cd /var/log

# Go to a directory using a relative path (from current location)
cd Documents/Projects

💡Use special navigation shortcuts

Linux provides handy shortcuts for directory navigation:

cd /         # Go to root directory
cd ~         # Go to your home directory
cd -         # Go to previous directory
cd ..        # Go up one level
cd ../..     # Go up two levels

These shortcuts save time and make navigation more efficient.

Conclusion

Understanding how to navigate to and from the root directory is fundamental to working effectively in Linux. The root directory (/) serves as the foundation of your filesystem, distinct from the root user's home directory (/root).

By mastering the concepts of absolute vs relative paths and understanding the dual role of the forward slash, you'll be well-equipped to navigate your Linux system confidently.

by: LHB Community
Thu, 27 Mar 2025 21:33:13 +0530


Kubernetes is a powerful tool for managing containerized applications, and one of its key features is the ability to run specific workloads across your cluster. One such workload is the DaemonSet, a Kubernetes API object designed to ensure that a copy of a Pod runs on every Node in your cluster.

In this article, we’ll explore what DaemonSets are, how they work, and when to use them.

What is a Kubernetes DaemonSet?

A DaemonSet is a Kubernetes object that ensures a specific Pod runs on every Node in your cluster. When new Nodes are added, the DaemonSet automatically schedules the Pod on them. Similarly, when Nodes are removed, the Pods are cleaned up. This makes DaemonSets ideal for running background services that need to be present on every Node, such as monitoring agents, log collectors, or backup tools.

Key Features of DaemonSets:

  • Automatic Pod Scheduling: DaemonSets ensure that a Pod runs on every Node, even as Nodes are added or removed.
  • Tolerations: DaemonSets can schedule Pods on Nodes with resource constraints or other restrictions that would normally prevent scheduling.
  • Node-Specific Customization: You can configure DaemonSets to run Pods only on specific Nodes using labels and selectors.

When Should You Use a DaemonSet?

DaemonSets are particularly useful for workloads that need to run on every Node in your cluster. Here are some common use cases:

  • Node Monitoring Agents: Tools like Prometheus Node Exporter or Datadog agents need to run on every Node to collect metrics.
  • Log Collection: Services like Fluentd or Logstash can be deployed as DaemonSets to collect logs from each Node.
  • Backup Tools: Backup agents that need to interact with Node-level data can be deployed as DaemonSets to ensure all Nodes are covered.
  • Network Plugins: Tools like Calico or Weave Net that provide networking functionality often run as DaemonSets to ensure they’re present on every Node.

Unlike ReplicaSets or Deployments, which schedule Pods based on resource availability, DaemonSets are tied to the number of Nodes in your cluster.

Example: Deploying a DaemonSet

Let’s walk through a simple example of deploying a DaemonSet in your Kubernetes cluster. For this tutorial, we’ll use Filebeat, a lightweight log shipper that collects logs and forwards them to Elasticsearch or Logstash.

You can use Minikube to create a local cluster with three Nodes:

minikube start --nodes=3

Step 1: Create a DaemonSet Manifest

Here’s a basic DaemonSet manifest for Filebeat:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
spec:
  selector:
    matchLabels:
      name: filebeat
  template:
    metadata:
      labels:
        name: filebeat
    spec:
      containers:
        - name: filebeat
          image: docker.elastic.co/beats/filebeat:8.10.0
          volumeMounts:
            - name: varlog
              mountPath: /var/log
            - name: varlibdockercontainers
              mountPath: /var/lib/docker/containers
              readOnly: true
      volumes:
        - name: varlog
          hostPath:
            path: /var/log
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers

Step 2: Apply the Manifest

Save the manifest to a file filebeat.yaml and apply it to your cluster:

kubectl apply -f filebeat.yaml

Step 3: Verify the DaemonSet

Check the status of the DaemonSet and the Pods it created:

kubectl get daemonsets

Output:

NAME       DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
filebeat   3         3         3       3            3           <none>          10s

For detailed information, run:

kubectl get pods -o wide

Output:

NAME             READY   STATUS    RESTARTS   AGE   IP           NODE
filebeat-abc12   1/1     Running   0          30s   10.244.1.2   minikube-m02
filebeat-def34   1/1     Running   0          30s   10.244.2.2   minikube-m03
filebeat-ghi56   1/1     Running   0          30s   10.244.0.3   minikube

Scoping DaemonSets to Specific Nodes

Sometimes, you may want to run DaemonSet Pods only on specific Nodes. You can achieve this using nodeSelectors or affinity rules. For example, to run Filebeat only on Nodes labeled with log-collection-enabled=true, update the DaemonSet manifest:

spec:
  template:
    spec:
      nodeSelector:
        log-collection-enabled: "true"

Then, label the desired Node:

kubectl label node <node-name> log-collection-enabled=true

Apply the updated manifest, and the DaemonSet will only schedule Pods on the labeled Node.

kubectl apply -f filebeat.yaml

Check the DaemonSet status:

kubectl get daemonsets

Output:

NAME       DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                 AGE
filebeat   1         1         1       1            1           log-collection-enabled=true   1m

View the Pod list to confirm the Pod is running on the labeled Node:

kubectl get pods -o wide

Output:

NAME             READY   STATUS    RESTARTS   AGE   IP           NODE
filebeat-abc12   1/1     Running   0          2m    10.244.1.2   minikube-m02

Scaling a DaemonSet

DaemonSets are automatically scaled based on the number of Nodes in your cluster. To scale a DaemonSet:

  • Add Nodes: New Nodes will automatically run the DaemonSet Pods.
  • Remove Nodes: Pods on removed Nodes will be cleaned up.

If you need to temporarily scale a DaemonSet to 0 (e.g., for maintenance), you can patch the DaemonSet with a dummy nodeSelector:

kubectl patch daemonset <daemonset-name> -p '{"spec": {"template": {"spec": {"nodeSelector": {"dummy": "true"}}}}}'

To scale it back up, remove the dummy selector.

kubectl patch daemonset filebeat -p '{"spec": {"template": {"spec": {"nodeSelector": {}}}}}'

DaemonSet Best Practices

  • Use DaemonSets for Node-Specific Workloads: Only use DaemonSets when your Pods need to run on every Node or a subset of Nodes.
  • Set Restart Policies Correctly: Ensure Pods have a restartPolicy of Always to ensure they restart with the Node.
  • Avoid Manual Pod Management: Don’t manually edit or delete DaemonSet Pods, as this can lead to orphaned Pods.
  • Leverage Rollbacks: Use Kubernetes’ rollback feature to revert DaemonSet changes quickly if something goes wrong.
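For example, assuming the Filebeat DaemonSet from earlier, the rollout subcommands let you inspect its revision history and revert to the previous revision:

kubectl rollout history daemonset/filebeat
kubectl rollout undo daemonset/filebeat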

Conclusion

Whether you’re collecting logs with Filebeat, monitoring Nodes with Prometheus, or managing backups, DaemonSets provide a reliable and scalable solution. By understanding how to create, configure, and manage DaemonSets, you can ensure that your Node-level workloads are always running where they’re needed most.

by: Abhishek Kumar
Tue, 18 Mar 2025 10:58:47 +0530


You know that moment when you dive into a project, thinking, "This should be easy," and then hours later, you're buried under obscure errors, outdated forum posts, and conflicting advice?

Yeah, that was me while trying to package my Python app into a .deb file.

It all started with my attempt to revive an old project, which some of our long-time readers might remember - Compress PDF.

pdf compressor tool by its foss
PDF Compressor Tool v1.0 by Its FOSS

Since I’ve been learning Python these days, I thought, why not customize the UI, tweak the logic, and give it a fresh start?

The Python app was working great when running inside a virtual environment, but I was more interested in shipping it as a .deb binary, making installation as simple as dpkg -i app.deb.

Every tutorial I read online, covered bits and pieces, but none walked me through the entire process. So here I am, documenting my experience while packaging my script into a .deb file.

Choosing the right packaging tool

For turning a Python script into an executable, I am using PyInstaller. Initially, I tried using py2deb, a tool specifically meant for creating .deb packages.

Bad idea. Turns out, py2deb hasn’t been maintained in years and doesn’t support newer Python versions.

PyInstaller takes a Python script and bundles it along with its dependencies into a single standalone executable. This means users don’t need to install Python separately, it just works out of the box.

Step 1: Install PyInstaller

First, make sure you have PyInstaller installed. If not, install it using pip:

pip install pyinstaller

Check if it's installed correctly:

pyinstaller --version

Step 2: Create the .deb package structure

To keep things clean and structured, .deb packages follow a specific folder structure.

compressor/
├── pdf-compressor/                
│   ├── DEBIAN/                     
│   │   ├── control
│   │   ├── postinst
│   ├── usr/                         
│   │   ├── bin/
│   │   ├── share/
│   │   │   ├── applications/
│   │   │   ├── icons/
│   │   │   ├── pdf-compressor/

Let’s create it:

mkdir -p pdf-compressor/DEBIAN
mkdir -p pdf-compressor/usr/bin
mkdir -p pdf-compressor/usr/share/applications
mkdir -p pdf-compressor/usr/share/icons/
mkdir -p pdf-compressor/usr/share/pdf-compressor/
creating empty directories

What each directory is for?

  • usr/bin/: Stores the executable file.
  • usr/share/applications/: Contains the .desktop file (so the app appears in the system menu).
  • usr/share/icons/: Stores the app icon.
  • DEBIAN/: Contains metadata like package info and dependencies.

Optional: Packaging dependencies

Before packaging the app, I wanted to ensure it loads assets and dependencies correctly whether it's run as a script or a standalone binary.

Initially, I ran into two major problems:

  1. The in-app logo wasn’t displaying properly because asset paths were incorrect when running as a packaged executable.
  2. Dependency errors occurred when running the app as an executable.

To keep everything self-contained and avoid conflicts with system packages, I created a virtual environment inside: pdf-compressor/usr/share/pdf-compressor

python3 -m venv venv
source venv/bin/activate

Then, I installed all the dependencies inside it:

pip install -r requirements.txt
deactivate

This ensures that dependencies are bundled properly and won’t interfere with system packages.

Now to ensure that the app correctly loads assets and dependencies, I modified the script as follows:

import sys
import os

# Ensure the virtual environment is used
venv_path = "/usr/share/pdf-compressor/venv"
if os.path.exists(venv_path):
    sys.path.insert(0, os.path.join(venv_path, "lib", "python3.10", "site-packages"))

# Detect if running as a standalone binary
if getattr(sys, 'frozen', False):
    app_dir = sys._MEIPASS  # PyInstaller's temp folder
else:
    app_dir = os.path.dirname(os.path.abspath(__file__))

# Set correct paths for assets
icon_path = os.path.join(app_dir, "assets", "icon.png")
logo_path = os.path.join(app_dir, "assets", "itsfoss-logo.webp")
pdf_icon_path = os.path.join(app_dir, "assets", "pdf.png")

print("PDF Compressor is running...")

What’s happening here?

  • sys._MEIPASS → When the app is packaged with PyInstaller, assets are extracted to a temporary folder. This ensures they are accessible.
  • Virtual environment path (/usr/share/pdf-compressor/venv) → If it exists, it is added to sys.path, so installed dependencies can be found.
  • Assets paths → Dynamically assigned so they work in both script and standalone modes.

After making these changes, my issue was mostly resolved.

📋
I know there are other ways to handle this, but since I'm still learning, this approach worked well for me. If I find a better solution in the future, I’ll definitely improve it!

Step 3: Compiling python script into executable binary

Now comes the exciting part, turning the Python script into a standalone executable. Navigate to the root directory where the main Python script is located. Then run:

pyinstaller --name=pdf-compressor --onefile --windowed --add-data "assets:assets" pdf-compressor.py
  • --onefile: Packages everything into a single executable file
  • --windowed: Hides the terminal (useful for GUI apps)
  • --name=pdf-compressor: Sets the output filename
  • --add-data "assets:assets" → Ensures images/icons are included.
creating the standalone executable file from a python script

After this, PyInstaller will create a dist/ directory; inside, you'll find pdf-compressor. This is the standalone app!

Try running it:

./dist/pdf-compressor

If everything works as expected, you’re ready to package it into a .deb file.

Step 4: Move the executable to the correct location

Now, move the standalone executable into the bin directory:

mv dist/pdf-compressor pdf-compressor/usr/bin/pdf-compressor
moving the executable to the bin directory

Step 5: Add an application icon

I don't know about you, but to me, an app without an icon, or with just a generic gear icon, feels incomplete. The icon gives your app its vibe.

Let’s place the assets directory which contains the icon and logo files inside the right directory:

cp -r assets/ pdf-compressor/usr/share/pdf-compressor/
copying the assets directory to the correct directory used by packaged binary

Step 6: Create a desktop file

To make the app appear in the system menu, we need a .desktop file. Open a new file:

nano pdf-compressor/usr/share/applications/pdf-compressor.desktop

Paste this content:

[Desktop Entry]
Name=PDF Compressor
Comment=Compress PDF files easily
Exec=/usr/bin/pdf-compressor
Icon=/usr/share/icons/pdf-compressor.png
Terminal=false
Type=Application
Categories=Utility
  • Exec → Path to the executable.
  • Icon → App icon location.
  • Terminal=false → Ensures it runs as a GUI application.
creating desktop file for pdf compressor

Save and exit (CTRL+X, then Y, then Enter).

Step 7: Create the control file

At the heart of every .deb package is a metadata file called control.

This file is what tells the Debian package manager (dpkg) what the package is, who maintains it, what dependencies it has, and a brief description of what it does.

That’s why defining these details here ensures a smooth experience for users.

Inside the DEBIAN/ directory, create a control file:

nano pdf-compressor/DEBIAN/control

then I added the following content in it:

Package: pdf-compressor
Version: 1.0
Section: utility
Priority: optional
Architecture: amd64
Depends: python3, ghostscript
Recommends: python3-pip, python3-venv
Maintainer: Your Name <your@email.com>
Description: A simple PDF compression tool.
 Compress PDF files easily using Ghostscript.

Step 8: Create the postinst script

The post-installation (postinst) script, as the name suggests, is executed after the package is installed. It ensures all dependencies are correctly set up.

nano pdf-compressor/DEBIAN/postinst

Add this content:

#!/bin/bash
set -e  # Exit if any command fails

echo "Setting up PDF Compressor..."
chmod +x /usr/bin/pdf-compressor

# Install dependencies inside a virtual environment
python3 -m venv /usr/share/pdf-compressor/venv
source /usr/share/pdf-compressor/venv/bin/activate
pip install --no-cache-dir pyqt6 humanize

echo "Installation complete!"
update-desktop-database

What’s happening here?

  • set -e → Ensures the script stops on failure.
  • Creates a virtual environment → This allows dependencies to be installed in an isolated way.
  • chmod +x /usr/bin/pdf-compressor → Ensures the binary is executable.
  • update-desktop-database → Updates the system’s application database.
creating postinst file

Setting up the correct permission for postinst is important:

chmod 755 pdf-compressor/DEBIAN/postinst
postinst permission error

Step 9: Build & Install the deb package

After all the hard work, it's finally time to bring everything together. To build the package, we’ll use dpkg-deb --build, a built-in Debian packaging tool.

This command takes our structured pdf-compressor directory and turns it into a .deb package that can be installed on any Debian-based system.

dpkg-deb --build pdf-compressor

If everything goes well, you should see output like:

dpkg-deb: building package 'pdf-compressor' in 'pdf-compressor.deb'.
building the deb file from the executable binary
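Before installing, you can take a quick look inside the freshly built package to confirm the metadata and file layout are what you expect:

dpkg-deb --info pdf-compressor.deb
dpkg-deb --contents pdf-compressor.deb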

Now, let’s install it and see our application in action!

sudo dpkg -i pdf-compressor.deb
installing the pdf-compressor.deb file
💡
If installation fails due to missing dependencies, fix them using: sudo apt install -f

This installs pdf-compressor onto your system just like any other Debian package. To verify, you can either launch it from the Applications menu or directly via terminal:

pdf-compressor
pdf compressor v2.0 running on lubuntu vm
PDF Compressor v2.0 running inside Lubuntu | P.S. I know the color scheme could have been better 😅

Final thoughts

Packaging a Python application isn’t as straightforward as I initially thought. During my research, I couldn't find any solid guide that walks you through the entire process from start to finish.

So, I had to experiment, fail, and learn, and that’s exactly what I’ve shared in this guide. Looking back, I realize that a lot of what I struggled with could have been simplified had I known better. But that’s what learning is all about, right?

I believe that this write-up will serve as a good starting point for new Python developers like me who are still struggling to package their projects.

That said, I know this isn’t the only way to package Python applications, there are probably better and more efficient approaches out there. So, I’d love to hear from you!

Also, if you found this guide helpful, be sure to check out our PDF Compressor project on GitHub. Your feedback, contributions, and suggestions are always welcome!

Happy coding! 😊

by: Abhishek Prakash
Wed, 05 Mar 2025 20:45:06 +0530


The whereis command helps users locate the binary, source, and manual page files for a given command. And in this tutorial, I will walk you through practical examples to help you understand how to use whereis command.

Unlike other search commands like find that scan the entire file system, whereis searches predefined directories, making it faster and more efficient.

It is particularly useful for system administrators and developers to locate files related to commands without requiring root privileges.

whereis Command Syntax

To use any command to its maximum potential, it is important to know its syntax and that is why I'm starting this tutorial by introducing the syntax of the whereis command:

whereis [OPTIONS] FILE_NAME...

Here,

  • OPTIONS: Flags that modify the search behavior.
  • FILE_NAME: The name of the file or command to locate.

Now, let's take a look at available options of the whereis command:

Flag Description
-b Search only for binary files.
-s Search only for source files.
-m Search only for manual pages.
-u Search for unusual files (files missing one or more of binary, source, or manual).
-B Specify directories to search for binary files (must be followed by -f).
-S Specify directories to search for source files (must be followed by -f).
-M Specify directories to search for manual pages (must be followed by -f).
-f Terminate directory lists provided with -B, -S, or -M, signaling the start of file names.
-l Display directories that are searched by default.
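For instance, to see exactly which directories whereis scans by default, use the -l flag from the table above:

whereis -l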

1. Locate all files related to a command

To find all files (binary, source, and manual) related to a command, all you have to do is append the command name to the whereis command as shown here:

whereis command

For example, if I want to locate all files related to bash, then I would use the following:

whereis bash
Locate all files related to a command using whereis command

Here,

  • /usr/bin/bash: Path to the binary file.
  • /usr/share/man/man1/bash.1.gz: Path to the manual page.

2. Search for binary files only

To locate only the binary (executable) file of a command, use the -b flag along with the target command as shown here:

whereis -b command

If I want to search for the binary files for the ls command, then I would use the following:

whereis -b ls
Search for binary files only

3. Search for the manual page only

To locate only the manual page for a specific command, use the -m flag along with the targeted command as shown here:

whereis -m command

For example, if I want to search for the manual page location for the grep command, then I would use the following:

whereis -m grep
Search for manual page only using the whereis command

As you can see, it gave me two locations:

  • /usr/share/man/man1/grep.1.gz: A manual page which can be accessed through man grep command.
  • /usr/share/info/grep.info.gz: An info page that can be accessed through info grep command.

4. Search for source files only

To find only source code files associated with a command, use the -s flag along with the targeted command as shown here:

whereis -s command

For example, if I want to search source files for the gcc, then I would use the following:

whereis -s gcc

My system is fresh, and I haven't installed any packages from source, so I got a blank output.

5. Specify custom directories for searching

To limit your search to specific directories, use options like -B, -S, or -M. For example, if I want to limit my search to the /bin directory for the cp command, then I would use the following command:

whereis -b -B /bin -f cp
Limit search for specific directories using whereis command

Here,

  • -b: This flag tells whereis to search only for binary files (executables), ignoring source and manual files.
  • -B /bin: The -B flag specifies a custom directory (/bin in this case) where whereis should look for binary files. It also limits the search to the /bin directory instead of searching all default directories.
  • -f cp: Without -f, the whereis command would interpret cp as another directory.

6. Identify commands missing certain files (unusual files)

The whereis command can help you find commands that are missing one or more associated files (binary, source, or manual). This is particularly useful for troubleshooting or verifying file completeness.

For example, to search for commands in the /bin directory that are missing manual pages, you first have to change your directory to /bin and then use the -u flag to search for unusual files:

cd /bin
whereis -u -m *
Search for unusual files using whereis command

Wrapping Up...

This was a quick tutorial on how you can use the whereis command in Linux including practical examples and syntax. I hope you will find this guide helpful.

If you have any queries or suggestions, leave us a comment.

by: Sreenath V
Tue, 04 Mar 2025 20:23:37 +0530


Kubernetes is a powerful platform designed to manage and automate the deployment, scaling, and operation of containerized applications. In simple terms, it helps you run and manage your software applications in an organized and efficient way.

kubectl is the command-line tool that helps you manage your Kubernetes cluster. It allows you to deploy applications, manage resources, and get information about your applications. Simply put, kubectl is the main tool you use to communicate with Kubernetes and get things done.

In this article, we will explore essential kubectl commands that will make managing your Kubernetes cluster easier and more efficient.

Essential Kubernetes Concepts

Before diving into the commands, let's quickly review some key Kubernetes concepts to ensure a solid understanding.

  • Pod: The smallest deployable unit in Kubernetes, containing one or more containers that run together on the same node.
  • Node: A physical or virtual machine in the Kubernetes cluster where Pods are deployed.
  • Services: An abstraction that defines a set of Pods and provides a stable network endpoint to access them.
  • Deployment: A controller that manages the desired state and lifecycle of Pods by creating, updating, and deleting them.
  • Namespace: A logical partition in a Kubernetes cluster to isolate and organize resources for different users or teams.

General Command Line Options

This section covers various optional flags and parameters that can be used with different kubectl commands. These options help customize the output format, specify namespaces, filter resources, and more, making it easier to manage and interact with your Kubernetes clusters.

The get command in kubectl is used to retrieve information about Kubernetes resources. It can list various resources such as pods, services, nodes, and more.

To retrieve a list of all the pods in your Kubernetes cluster in JSON format,

kubectl get pods -o json

List all the pods in the current namespace and output their details in YAML format.

kubectl get pods -o yaml

Output the details in plain-text format, including the node name for each pod,

kubectl get pods -o wide

List all the pods in a specific namespace using the -n option:

kubectl get pods -n <namespace_name>

To create a Kubernetes resource from a configuration file, use the command:

kubectl create -f <filename>

To filter logs by a specific label, you can use:

kubectl logs -l <label_filter>

For example, to get logs from all pods labeled app=myapp, you would use:

kubectl logs -l app=myapp

For quick command line help, always use the -h option.

kubectl -h

Create and Delete Kubernetes Resources

In Kubernetes, you can create resources using the kubectl create command, update or apply changes to existing resources with the kubectl apply command, and remove resources with the kubectl delete command. These commands allow you to manage the lifecycle of your Kubernetes resources effectively and efficiently.

apply and create are two different approaches to creating resources in Kubernetes. While apply follows a declarative approach, create follows an imperative approach.

Learn about these different approaches in our dedicated article.

kubectl apply vs create: What’s the Difference?
Two different approaches for creating resources in Kubernetes cluster. What’s the difference?

To apply a configuration file to a pod, use the command:

 kubectl apply -f <JSON/YAML configuration file>

If you have multiple JSON/YAML configuration files, you can use glob pattern matching here:

 kubectl apply -f '*.json'

To create a new Kubernetes resource using a configuration file,

kubectl create -f <configuration file>

The -f option can receive directory values or configuration file URL to create resource.

kubectl create -f <directory>

OR

kubectl create -f <URL to files>

The delete option is used to delete resources by file names, resources and names, or by resources and label selector.

To delete resources using the type and name specified in the configuration file,

 kubectl delete -f <configuration file>

Cluster Management and Context Commands

Cluster management in Kubernetes refers to the process of querying and managing information about the Kubernetes cluster itself. According to the official documentation, it involves various commands to display endpoint information, view and manage cluster configurations, list API resources and versions, and manage contexts.

The cluster-info command displays the endpoint information about the master and services in the cluster.

kubectl cluster-info

To print the client and server version information for the current context, use:

kubectl version

To display the merged kubeconfig settings,

kubectl config view

To extract and display the names of all users from the kubeconfig file, you can use a jsonpath expression.

kubectl config view -o jsonpath='{.users[*].name}'
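
The same jsonpath approach works for other kubeconfig fields. For instance, this illustrative query lists the names of all clusters defined in your kubeconfig:

kubectl config view -o jsonpath='{.clusters[*].name}'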

Display the current context that kubectl is using,

kubectl config current-context

You can display a list of contexts with the get-context option.

kubectl config get-contexts

To set the default context, use:

kubectl config use-context <context-name>

Print the supported API resources on the server.

kubectl api-resources

It includes core resources like pods, services, and nodes, as well as custom resources defined by users or installed by operators.

You can use the api-versions command to print the supported API versions on the server in the form of "group/version". This command helps you identify which API versions are available and supported by your Kubernetes cluster.

kubectl api-versions

The --all-namespaces option available with the get command can be used to list the requested object(s) across all namespaces. For example, to list all pods existing in all namespaces,

kubectl get pods --all-namespaces

Daemonsets

A DaemonSet in Kubernetes ensures that all (or some) Nodes run a copy of a specified Pod, providing essential node-local facilities like logging, monitoring, or networking services. As nodes are added or removed from the cluster, DaemonSets automatically add or remove Pods accordingly. They are particularly useful for running background tasks on every node and ensuring node-level functionality throughout the cluster.

There is no dedicated kubectl create generator for DaemonSets, so you create one from a manifest file:

kubectl apply -f <daemonset_manifest.yaml>
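
A minimal sketch of such a manifest, using hypothetical names and a placeholder image, could look like this:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-agent
spec:
  selector:
    matchLabels:
      name: log-agent
  template:
    metadata:
      labels:
        name: log-agent
    spec:
      containers:
      - name: log-agent
        image: <log_agent_image>   # replace with your logging/monitoring agent image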

To list one or more DaemonSets, use the command:

kubectl get daemonset

The command,

kubectl edit daemonset <daemonset_name>

will open up the specified DaemonSet in the default editor so you can edit and update the definition.

To delete a daemonset,

kubectl delete daemonset <daemonset_name>

You can check the rollout status of a daemonset with the kubectl rollout command:

kubectl rollout status daemonset

The command below provides detailed information about the specified DaemonSet in the given namespace:

kubectl describe ds <daemonset_name> -n <namespace_name>

Deployments

Kubernetes deployments are essential for managing and scaling applications. They ensure that the desired number of application instances are running at all times, making it easy to roll out updates, perform rollbacks, and maintain the overall health of your application by automatically replacing failed instances.

In other words, Deployment allows you to manage updates for Pods and ReplicaSets in a declarative manner. By specifying the desired state in the Deployment configuration, the Deployment Controller adjusts the actual state to match at a controlled pace. You can use Deployments to create new ReplicaSets or replace existing ones while adopting their resources seamlessly. For more details, refer to StatefulSet vs. Deployment.
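
As a minimal sketch, a Deployment manifest with hypothetical names that declares three replicas of an nginx container could look like this (applied with kubectl apply -f):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: nginx:1.25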

To list one or more deployments:

kubectl get deployment

To display detailed information about the specified deployment, including its configuration, events, and status,

kubectl describe deployment <deployment_name>

The below command opens the specified deployment configuration in the default editor, allowing you to make changes to its configuration:

kubectl edit deployment <deployment_name>

To create a deployment using kubectl, specify the image to use for the deployment:

kubectl create deployment <deployment_name> --image=<image_name>

You can delete a specified deployment and all of its associated resources, such as Pods and ReplicaSets by using the command:

kubectl delete deployment <deployment_name>

To check the rollout status of the specified deployment and get information about the progress of its update process,

kubectl rollout status deployment <deployment_name>

Perform a rolling update in Kubernetes by setting the container image to a new version for a specific deployment.

kubectl set image deployment/<deployment_name> <container_name>=<image_name>:<new_version>
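
For example, assuming the hypothetical myapp-deployment sketched earlier, a rolling update to a newer image tag would look like:

kubectl set image deployment/myapp-deployment myapp=nginx:1.26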

To roll back the specified deployment to the previous revision (undo),

kubectl rollout undo deployment/<deployment name>

The command below will forcefully replace a resource from a configuration file:

kubectl replace --force -f <configuration file>

Retrieving and Filtering Events

In Kubernetes, events are a crucial component for monitoring and diagnosing the state of your cluster. They provide real-time information about changes and actions happening within the system, such as pod creation, scaling operations, errors, and warnings.

To retrieve and list recent events for all resources in the system, use the command:

kubectl get events

This provides valuable information about what has happened in your cluster.

To filter and list only the events of type "Warning," thereby providing insights into any potential issues or warnings in your cluster,

kubectl get events --field-selector type=Warning

You can retrieve and list events sorted by their creation timestamp. This allows you to view events in chronological order.

kubectl get events --sort-by=.metadata.creationTimestamp

To list events, excluding those related to Pods,

kubectl get events --field-selector involvedObject.kind!=Pod

This helps you focus on events for other types of resources.

To list events specifically for a node with the given name,

kubectl get events --field-selector involvedObject.kind=Node,involvedObject.name=<node_name>

You can filter events, excluding those that are of the "Normal" type, allowing you to focus on warning and error events that may require attention:

kubectl get events --field-selector type!=Normal

Managing Logs

Logs are essential for understanding the real-time behavior and performance of your applications. They provide a record of activity and outputs generated by containers and pods, which can be invaluable for debugging and monitoring purposes.

To print the logs for the specified pod:

kubectl logs <pod_name>

To print the logs for the specified pod from the last hour:

kubectl logs --since=1h <pod_name>

You can read the most recent 50 lines of logs for the specified pod using the --tail option.

kubectl logs --tail=50 <pod_name>

The command below streams and continuously displays the logs of the specified pod, optionally filtered by the specified container:

kubectl logs -f <pod_name> [-c <container_name>]

For example, as per the official documentation,

kubectl logs -f -c ruby web-1

This begins streaming the logs of the ruby container in pod web-1.

To continuously display the logs of the specified pod in real-time,

kubectl logs -f <pod_name>

You can fetch the logs up to the current point in time for a specific container within the specified pod using the command:

kubectl logs -c <container_name> <pod_name>

To save the logs for the specified pod to a file,

kubectl logs <pod_name> > pod.log

To print the logs for the previous instance of the specified pod:

kubectl logs --previous <pod_name>

This is particularly useful for troubleshooting and analyzing logs from a previously failed pod.

Namespaces

In Kubernetes, namespaces are used to divide and organize resources within a cluster, creating separate environments for different teams, projects, or applications. This helps in managing resources, access permissions, and ensuring that each group or application operates independently and securely.

To create a new namespace with the specified name in your Kubernetes cluster:

kubectl create namespace <namespace_name>

To list all namespaces in your Kubernetes cluster, use the command:

kubectl get namespaces

You can get a detailed description of the specified namespace, including its status, resource quotas using the command:

kubectl describe namespace <namespace_name>

To delete the specified namespace along with all the resources contained within it:

kubectl delete namespace <namespace_name>

The command

kubectl edit namespace <namespace_name>

opens the default editor on your machine with the configuration of the specified namespace, allowing you to make changes directly.

To display resource usage (CPU and memory) for all pods within a specific namespace, you can use the following command:

kubectl top pods --namespace=<namespace_name>

Nodes

In Kubernetes, nodes are the fundamental building blocks of the cluster, serving as the physical or virtual machines that run your applications and services.

To update the taints on one or more nodes, specify a key, value, and effect:

kubectl taint node <node_name> <key>=<value>:<effect>

List all nodes in your Kubernetes cluster:

kubectl get node

Remove a specific node from your Kubernetes cluster,

kubectl delete node <node_name>

Display resource usage (CPU and memory) for all nodes in your Kubernetes cluster:

kubectl top nodes

List all pods running on a node with a specific name:

kubectl get pods -o wide | grep <node_name>

Add or update annotations on a specific node:

kubectl annotate node <node_name> <key>=<value>
📋
Annotations are key-value pairs that can be used to store arbitrary non-identifying metadata.

Mark a node as unschedulable (no new pods will be scheduled on the specified node).

kubectl cordon <node_name>

Mark a previously cordoned (unschedulable) node as schedulable again:

kubectl uncordon <node_name>

Safely evict all pods from the specified node in preparation for maintenance or decommissioning:

kubectl drain <node_name>

Add or update labels on a specific node in your Kubernetes cluster:

kubectl label node <node_name> <key>=<value>
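
For instance, assuming a hypothetical node named worker-1, you could label it so that workloads can target it with a nodeSelector:

kubectl label node worker-1 disktype=ssd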

Pods

A pod is the smallest and simplest unit in the Kubernetes object model that you can create or deploy. A pod represents a single instance of a running process in your cluster and can contain one or more containers. These containers share the same network namespace, storage volumes, and lifecycle, allowing them to communicate with each other easily and share resources.

Pods are designed to host tightly coupled application components and provide a higher level of abstraction for deploying, scaling, and managing applications in a Kubernetes environment. Each pod is scheduled on a node, where the containers within it are run and managed together as a single, cohesive unit.
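
As an illustration, a minimal Pod manifest with hypothetical names looks like this; it can be created with the kubectl create -f or kubectl apply -f commands shown later in this section:

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
  - name: myapp
    image: nginx:1.25
    ports:
    - containerPort: 80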

List all pods in your Kubernetes cluster:

kubectl get pods

List all pods in your Kubernetes cluster and sort them by the restart count of the first container in each pod:

kubectl get pods --sort-by='.status.containerStatuses[0].restartCount'

List all pods in your Kubernetes cluster that are currently in the "Running" phase:

kubectl get pods --field-selector=status.phase=Running

Delete a specific pod from your Kubernetes cluster:

kubectl delete pod <pod_name>

Display detailed information about a specific pod in your Kubernetes cluster:

kubectl describe pod <pod_name>

Create a pod using the specifications provided in a YAML file:

kubectl create -f pod.yaml

OR

kubectl apply -f pod.yaml

To execute a command in a specific container within a pod in your Kubernetes cluster:

kubectl exec <pod_name> -c <container_name> <command>

Start an interactive shell session in a container within a specified pod:

# For Single Container Pods
kubectl exec -it <pod_name> -- /bin/sh

# For Multi-container pods,
kubectl exec -it <pod_name> -c <container_name> -- /bin/sh

Display resource (CPU and memory) usage statistics for all pods in your Kubernetes cluster:

kubectl top pods

Add or update annotations on a specific pod:

kubectl annotate pod <pod_name> <key>=<value>

To add or update the label of a pod:

kubectl label pod <pod_name> new-label=<label name>

List all pods in your Kubernetes cluster and display their labels:

kubectl get pods --show-labels

Forward one or more local ports to a pod in your Kubernetes cluster, allowing you to access the pod's services from your local machine:

kubectl port-forward <pod_name> <port_number_to_listen_on>:<port_number_to_forward_to>
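
For example, assuming the hypothetical myapp-pod above serves HTTP on container port 80, you could forward local port 8080 to it and test it from another terminal:

kubectl port-forward myapp-pod 8080:80
# in another terminal
curl http://localhost:8080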

Replication Controllers

Replication Controller (RC) ensures that a specified number of pod replicas are running at any given time. If any pod fails or is deleted, the Replication Controller automatically creates a replacement. This self-healing mechanism enables high availability and scalability of applications.

To list all Replication Controllers in your Kubernetes cluster

kubectl get rc

List all Replication Controllers within a specific namespace:

kubectl get rc --namespace="<namespace_name>"

ReplicaSets

ReplicaSet is a higher-level concept that ensures a specified number of pod replicas are running at any given time. It functions similarly to a Replication Controller but offers more powerful and flexible capabilities.

List all ReplicaSets in your Kubernetes cluster.

kubectl get replicasets

To display detailed information about a specific ReplicaSet:

kubectl describe replicasets <replicaset_name>

Scale the number of replicas for a specific resource, such as a Deployment, ReplicaSet, or ReplicationController, in your Kubernetes cluster.

kubectl scale --replicas=<number_of_replicas> <resource_type>/<resource_name>
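
For example, to scale the hypothetical myapp-deployment up to five replicas:

kubectl scale --replicas=5 deployment/myapp-deployment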

Secrets

Secrets are used to store and manage sensitive information such as passwords, tokens, and keys.

Unlike regular configuration files, Secrets help ensure that confidential data is securely handled and kept separate from application code.

Secrets can be created, managed, and accessed within the Kubernetes environment, providing a way to distribute and use sensitive data without exposing it in plain text.

To create a Secret,

kubectl create secret (docker-registry | generic | tls)
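
As a quick illustration with hypothetical names and values, you can create a generic Secret from literal key-value pairs:

kubectl create secret generic db-credentials --from-literal=username=admin --from-literal=password='S3cr3t!'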

List all Secrets in your Kubernetes cluster:

kubectl get secrets

Display detailed information about a specific Secret:

kubectl describe secret <secret_name>

Delete a specific Secret from your Kubernetes cluster:

kubectl delete secret <secret_name>

Services

Services act as stable network endpoints for a group of pods, allowing seamless communication within the cluster. They provide a consistent way to access pods, even as they are dynamically created, deleted, or moved.

By using a Service, you ensure that your applications can reliably find and interact with each other, regardless of the underlying pod changes.

Services can also distribute traffic across multiple pods, providing load balancing and improving the resilience of your applications.

To list all Services in your Kubernetes cluster:

kubectl get services

To display detailed information about a specific Service:

kubectl describe service <service_name>

Create a Service that exposes a deployment:

kubectl expose deployment <deployment_name> --port=<port> --target-port=<target_port> --type=<type>
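
For example, to expose the hypothetical myapp-deployment as a ClusterIP Service on port 80, forwarding to container port 80:

kubectl expose deployment myapp-deployment --port=80 --target-port=80 --type=ClusterIP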

Edit the configuration of a specific Service:

kubectl edit service <service_name>

Service Accounts

Service Accounts provide an identity for processes running within your cluster, enabling them to interact with the Kubernetes API and other resources. By assigning specific permissions and roles to Service Accounts, you can control access and limit the actions that pods and applications can perform, enhancing the security and management of your cluster.

Service Accounts are essential for managing authentication and authorization, ensuring that each component operates with the appropriate level of access and adheres to the principle of least privilege.

To list all Service Accounts in your Kubernetes cluster:

kubectl get serviceaccounts

Display detailed information about a specific Service Account:

kubectl describe serviceaccount <serviceaccount_name>

Next is replacing a service account. Before replacing, you need to export the existing Service Account definition to a YAML file.

kubectl get serviceaccount <serviceaccount_name> -o yaml > serviceaccount.yaml

Once you have made changes to the YAML file, replace the existing Service Account with the modified one:

kubectl replace -f serviceaccount.yaml

Delete a specific Service Account from your Kubernetes cluster:

kubectl delete serviceaccount <service_account_name>

StatefulSet

StatefulSet is a specialized workload controller designed for managing stateful applications. Unlike Deployments, which are suitable for stateless applications, StatefulSets provide guarantees about the ordering and uniqueness of pods.

Each pod in a StatefulSet is assigned a unique, stable identity and is created in a specific order. This ensures consistency and reliability for applications that require persistent storage, such as databases or distributed systems.

StatefulSets also facilitate the management of pod scaling, updates, and rollbacks while preserving the application's state and data.

To list all StatefulSets in your Kubernetes cluster:

kubectl get statefulsets

To delete a specific StatefulSet from your Kubernetes cluster without deleting the associated pods:

kubectl delete statefulset <stateful_set_name> --cascade=orphan

💬 Hope you like this quick overview of the kubectl commands. Please let me know if you have any questions or suggestions.

by: Abhishek Prakash
Fri, 28 Feb 2025 19:22:01 +0530


What career opportunities are available for someone starting with Linux? I am talking about entering this field and that's why I left out roles like SRE from this list. I would appreciate your feedback on it if you are already working in the IT industry. Let's help out our juniors.

What Kind of Job Can You Get if You Learn Linux?
While there are tons of job roles created around Linux, here are the ones that you can choose for an entry level career.

Here are the other highlights of this edition of LHB Linux Digest:

  • Zed IDE
  • Essential Docker commands
  • Self hosted project management tool
  • And more tools, tips and memes for you

This edition of LHB Linux Digest newsletter is supported by PikaPods.

📖 Linux Tips and Tutorials

Learn to increase (or perhaps decrease) swap on Ubuntu Linux. This should work on other distros too if they use swap file instead of swap partition.

How to Increase Swap Size on Ubuntu Linux
In this quick tip, you’ll learn to increase the swap size on Ubuntu and other Linux distributions.
by: Abhishek Prakash
Thu, 20 Feb 2025 17:48:14 +0530


Linux is the foundation of many IT systems, from servers to cloud platforms. Mastering Linux and related tools like Docker, Kubernetes, and Ansible can unlock career opportunities in IT, system administration, networking, and DevOps.

I mean, that's one of the reasons why many people use Linux.

The next question is, what kinds of job roles can you get if you want to begin a career with Linux?

Let me share the job roles, required skills, certifications, and resources to help you transition into a Linux-based career.

📋
There are many more job roles out there, such as Cloud Engineer and Site Reliability Engineer (SRE). The ones I discuss here are primarily entry-level roles.

1. IT Technician

IT Technicians are responsible for maintaining computer systems, troubleshooting hardware/software issues, and supporting organizational IT needs.

They ensure smooth daily operations by resolving technical problems efficiently. So if you are a beginner and just want to get started in IT field, IT technician is one of the most basic yet important roles.

Responsibilities:

  • Install and configure operating systems, software, and hardware.
  • Troubleshoot system errors and repair equipment.
  • Provide user support for technical issues.
  • Monitor network performance and maintain security protocols.

Skills Required:

  • Basic Linux knowledge (file systems, permissions).
  • Networking fundamentals (TCP/IP, DNS).
  • Familiarity with common operating systems like Windows and macOS.

Certifications:

  • CompTIA Linux+ (XK0-005): Validates foundational Linux skills such as system management, security, scripting, and troubleshooting. Recommended for entry-level roles.
  • CompTIA A+: Focuses on hardware/software troubleshooting and is ideal for beginners.
📋
This is an absolute entry-level job role, and some would argue that it is shrinking, or at least that there won't be as many opportunities as there used to be. Also, it might not be a high-paying job.

2. System Administrator

System administrators manage servers, networks, and IT infrastructure. On a personal level, this is my favourite role.

As a system admin, you are expected to ensure system reliability, security, and efficiency by configuring software/hardware and automating repetitive tasks.

Responsibilities:

  • Install and manage operating systems (e.g., Linux).
  • Set up user accounts and permissions.
  • Monitor system performance and troubleshoot outages.
  • Implement security measures like firewalls.

Skills Required:

  • Proficiency in Linux commands and shell scripting.
  • Experience with configuration management tools (e.g., Ansible).
  • Knowledge of virtualization platforms (e.g., VMware).

Certifications:

  • Red Hat Certified System Administrator (RHCSA): Focuses on core Linux administration tasks such as managing users, storage configuration, basic container management, and security.
  • LPIC-1: Linux Administrator: Covers fundamental skills like package management and networking.
📋
This is a classic Linux job role, although the opportunities have started shrinking as the 'cloud' took over. This is why RHCSA and other sysadmin certifications have started including topics like Ansible in the mix.

3. Network Engineer

Being a network engineer, you are responsible for designing, implementing, and maintaining an organization's network infrastructure. In simple terms, you will be called first if there is any network-related problem ranging from unstable networks to misconfigured networks.

Responsibilities:

  • Configure routers, switches, firewalls, and VPNs.
  • Monitor network performance for reliability.
  • Implement security measures to protect data.
  • Document network configurations.

Skills Required:

  • Advanced knowledge of Linux networking (firewalls, IP routing).
  • Familiarity with protocols like BGP/OSPF.
  • Scripting for automation (Python or Bash).

Certifications:

  • Cisco Certified Network Associate (CCNA): Covers networking fundamentals such as IP connectivity, network access, automation, and programmability. It’s an entry-level certification for networking professionals.
  • CompTIA Network+: Focuses on troubleshooting network issues and implementing secure networks.
📋
A classic Linux-based job role that goes deep into networking. Many enterprises have their in-house network engineers. Other than that, data centers and cloud providers also employ network engineers.

4. DevOps Engineer

DevOps Engineers bridge development and operations teams to streamline software delivery. This is more of an advanced role where you will be focusing on automation tools like Docker for containerization and Kubernetes for orchestration.

Responsibilities:

  • Automate CI/CD pipelines using tools like Jenkins.
  • Deploy containerized applications using Docker.
  • Manage Kubernetes clusters for scalability.
  • Optimize cloud-based infrastructure (AWS/Azure).

Skills Required:

  • Strong command-line skills in Linux.
  • Proficiency in DevOps tools (e.g., Terraform).
  • Understanding of cloud platforms.

Certifications:

  • Certified Kubernetes Administrator (CKA): Validates expertise in managing Kubernetes clusters by covering topics like installation/configuration, networking, storage management, and troubleshooting.
  • AWS Certified DevOps Engineer – Professional: Focuses on automating AWS deployments using DevOps practices.
📋
The newest but most in-demand job role these days. Certifications like CKA and CKAD can help you skip the queue and land the job. It also pays more than the other roles discussed here.
Linux for DevOps: Essential Knowledge for Cloud Engineers
Learn the essential concepts, command-line operations, and system administration tasks that form the backbone of Linux in the DevOps world.

| Certification | Role | Key Topics Covered | Cost | Validity |
|---|---|---|---|---|
| CompTIA Linux+ | IT Technician | System management, security basics, scripting | $207 | 3 Years |
| Red Hat Certified System Admin | System Administrator | User management, storage configuration, basic container management | $500 | 3 Years |
| Cisco CCNA | Network Engineer | Networking fundamentals including IP connectivity/security | $300 | 3 Years |
| Certified Kubernetes Admin | DevOps Engineer | Cluster setup/management, troubleshooting Kubernetes environments | $395 | 3 Years |

Linux Foundation Kubernetes Certification Discount

Skills required across roles

Here, I have listed the skills that are required for all the 4 roles listed above:

Core skills:

  1. Command-line proficiency: Navigating file systems and managing processes.
  2. Networking basics: Understanding DNS, SSH, and firewalls.
  3. Scripting: Automating tasks using Bash or Python.

Advanced skills:

  1. Configuration management: Tools like Ansible or Puppet.
  2. Containerization: Docker for packaging applications.
  3. Orchestration: Kubernetes for managing containers at scale.

Free resources to Learn Linux

For beginners:

  1. Bash Scripting for Beginners: Our in-house free course for command-line basics.
  2. Linux Foundation Free Courses: Covers Linux basics like command-line usage.
  3. LabEx: Offers hands-on labs for practising Linux commands.
  4. Linux for DevOps: Essential Linux knowledge for cloud and DevOps engineers.
  5. Learn Docker: Our in-house effort to provide basic Docker tutorials for free.

For advanced topics:

  1. KodeKloud: Interactive courses on Docker/Kubernetes with real-world scenarios.
  2. Coursera: Free trials for courses like "Linux Server Management."
  3. RHCE Ansible EX294 Exam Preparation Course: Our editorial effort to provide a free Ansible course covering basic to advanced Ansible topics.

Conclusion

I would recommend you start by mastering the basics of Linux commands before you dive into specialized tools like Docker or Kubernetes.

We have a complete course on Linux command line fundamentals. No matter which role you are preparing for, you cannot ignore the basics.

Linux for DevOps: Essential Knowledge for Cloud Engineers
Learn the essential concepts, command-line operations, and system administration tasks that form the backbone of Linux in the DevOps world.

Use free resources to build your knowledge base and validate your skills through certifications tailored to your career goals. With consistent learning and hands-on practice, you can secure a really good role in the tech industry!
