Blogs
Our community blogs
by: Sourav Rudra
Mon, 07 Jul 2025 13:13:17 GMT
Most file sharing today takes place through cloud services, but that's not always necessary. Local file transfers are still relevant, letting people send files directly between devices on the same network without involving a nosy middleman (a server, in this case).
Instead of uploading confidential documents on WhatsApp and calling it a day, people could share them directly over their local network. This approach is faster, more private, and more reliable than relying on a third-party server.
Remember, if you value your data, so does Meta. 🕵️♂️
That’s where Packet comes in, offering an easy, secure way to transfer files directly between Linux and Android devices.
Wireless File Transfers via Quick Share
Packet is a lightweight, open source app for Linux that makes transferring files effortless. It leverages a partial implementation of Google's Quick Share protocol (proprietary) to enable easy wireless transfers over your local Wi-Fi network (via mDNS) without needing any cables or cloud servers.
In addition to that, Packet supports device discovery via Bluetooth, making it easy to find nearby devices without manual setup. It can also be integrated with GNOME’s Nautilus file manager (Files), allowing you to send files directly from your desktop with a simple right-click (requires additional configuration).
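If you are curious what that mDNS discovery step looks like at a lower level, here is a minimal, hypothetical Python sketch using the third-party zeroconf package (pip install zeroconf). The service type below is only a placeholder for illustration; it is not Packet's actual Quick Share service name.

import time
from zeroconf import Zeroconf, ServiceBrowser  # third-party: pip install zeroconf

class Listener:
    def add_service(self, zc, type_, name):
        # Called whenever a matching service is announced on the local network
        print(f"Found service: {name}")

    def remove_service(self, zc, type_, name):
        print(f"Service gone: {name}")

    def update_service(self, zc, type_, name):
        pass

zc = Zeroconf()
# "_http._tcp.local." is a stand-in service type; Packet/Quick Share use their own.
browser = ServiceBrowser(zc, "_http._tcp.local.", Listener())
time.sleep(5)  # browse for a few seconds
zc.close()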
⭐ Key Features
- Quick Share Support
- Local, Private Transfers
- File Transfer Notifications
- Nautilus Integration for GNOME
How to Send Files Using Packet?
First things first, you have to download and install the latest release of Packet from Flathub by running this command in your terminal:
flatpak install flathub io.github.nozwock.Packet
Once launched, sending files from your Linux computer to your Android smartphone is straightforward. Enable Bluetooth on your laptop/computer, then click on the big blue "Add Files" button and select the files you want to send.
Adding new files for transfer to Packet is easy.
You can also drag and drop files directly into Packet for a quicker sharing experience. If you are looking to transfer a whole folder, it’s best to first compress it into an archive like a TAR or ZIP, then send it through Packet for transmission.
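If you prefer to script that compression step, Python's standard library can do it in a single call. A small sketch, where the folder and archive names are placeholders:

import shutil

# Pack "MyFolder" into MyFolder.zip so it can be sent through Packet as one file.
# Use "gztar" instead of "zip" if you prefer a .tar.gz archive.
shutil.make_archive("MyFolder", "zip", "MyFolder")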
Once you are done choosing files, choose your Android phone from the recipients list and verify the code shown on screen.
File transfers from Linux to Android are lightning fast!
Though, before you do all that, ensure that Quick Share is set up on your smartphone to allow Nearby sharing with everyone. Additionally, take note of your device’s name; this is how it will appear on your Linux machine when sending/receiving files.
When you start the transfer, your smartphone will prompt you to "Accept" or "Decline" the Quick Share request. Only proceed if the PIN or code shown on both devices matches to ensure a secure transfer.
Transferring files the other way around, from Android to Linux, is just as simple. On your Android device, select the files you want to share, tap the "Share" button, and choose "Quick Share". Your Linux computer should appear in the list if Packet is running and your device is discoverable.
File transfers from Android to Linux are the same!
You can change your Linux device’s name from the "Preferences" menu in Packet (accessible via the hamburger menu). This is the name that will show up on your Android device when sharing files.
Packet also shows handy system notifications for file transfers, so you don’t miss a thing.
Packet shows helpful notifications and lets you change a few basic settings.
If you use the GNOME Files app (Nautilus), then there’s an optional plugin that adds a "Send with Packet" option to the right-click menu, making it even easier to share files without opening the app manually.
Overall, Packet feels like a practical tool for local file sharing between devices. It works well across Android and Linux devices, and can do the same for two Linux devices on the same network.
And, I must say, it gives tough competition to LocalSend, another file transfer tool that’s an AirDrop alternative for Linux users!
Suggested Read 📖
LocalSend: An Open-Source AirDrop Alternative For Everyone! It's time to ditch platform-specific solutions like AirDrop!
By: Edwin
Wed, 30 Apr 2025 13:08:34 +0000
A lot of people want Linux but do not want to either remove Windows or take up the overwhelming task of dual booting. For those people, WSL (Windows Subsystem for Linux) came as a blessing. WSL lets you run Linux on your Windows device without the overhead of a Virtual Machine (VM). But in some cases, where you want to fix a problem or simply do not want WSL anymore, you may have to uninstall WSL from your Windows system.
Here is a step-by-step guide to remove WSL from your Windows system: remove any Linux distribution, delete all related files, and clear up some disk space. Ready? Get. Set. Learn!
What is WSL
You probably know by now that we always start with the basics, i.e., what WSL does. Think of WSL as a compatibility layer for running Linux binaries on Microsoft Windows systems. It comes in two versions:
- WSL 1: Uses a translation layer between Linux and Windows.
- WSL 2: Uses a real Linux kernel in a lightweight VM.
All around the world, WSL is a favourite among developers, system administrators, and students for running Linux tools like bash, ssh, grep, awk, and even Docker. But if you have moved to a proper Linux system or just want to do a clean reinstall, here are the instructions to remove WSL completely without any errors.
Step 1: How to Uninstall Linux Distributions
The first step to uninstall WSL completely is to remove all installed Linux distributions.
Check Installed Distros
To check for the installed Linux distributions, open PowerShell or Command Prompt and run the command:
wsl --list --all
After executing this command, you will see a list of installed distros, such as:
- Ubuntu
- Debian
- Kali
- Alpine
How to Uninstall a Linux Distro
To uninstall a distro like Ubuntu, follow these instructions:
- Press Windows key + I to open the Settings window.
- Go to Apps, then click Installed Apps (or Apps & Features).
- Search for your distro and click Uninstall.
Repeat for all distros you no longer need. If you plan to uninstall WSL completely, we recommend removing all distros.
If you prefer PowerShell, run this command:
wsl --unregister <DistroName>
For example, if you want to remove Ubuntu, execute the command:
wsl --unregister Ubuntu
This removes the Linux distro and all its associated files.
Step 2: Uninstall WSL Components
Once we have removed the unwanted distros, let us uninstall the WSL platform itself.
- Open Control Panel and navigate to Programs and then click Turn Windows features on or off.
- Uncheck these boxes:
- Windows Subsystem for Linux
- Virtual Machine Platform (used by WSL 2)
- Windows Hypervisor Platform (optional)
- Click OK and restart your system.
Step 3: Remove WSL Files and Cache
Even after uninstalling WSL and Linux distributions, some data might remain. Here are the instructions to delete WSL’s cached files and reclaim disk space.
To delete the WSL Folder, open File Explorer and go to:
%USERPROFILE%\AppData\Local\Packages
Look for folders like:
- CanonicalGroupLimited…Ubuntu
- Debian…
- KaliLinux…
Delete any folders related to WSL distros you removed.
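If you would rather locate those leftovers programmatically before deleting them, here is a small, hypothetical Python sketch that lists matching package folders. The keyword list is only an example, so review the output before removing anything.

from pathlib import Path

# Leftover WSL distro packages live under the current user's profile.
packages = Path.home() / "AppData" / "Local" / "Packages"
keywords = ("CanonicalGroupLimited", "Debian", "KaliLinux")

for folder in packages.iterdir():
    if folder.is_dir() and any(k in folder.name for k in keywords):
        print(folder)  # review these paths, then delete them manually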
Step 4: Remove WSL CLI Tool (Optional)
If you installed WSL using the Microsoft Store (i.e., “wsl.exe” package), you can also uninstall it directly from the Installed Apps section:
- Go to Settings, and then to Apps and then open Installed Apps.
- Search for Windows Subsystem for Linux.
- Click Uninstall.
Step 5: Clean Up with Disk Cleanup Tool
Finally, use the built-in Disk Cleanup utility to clear any temporary files.
- Press Windows key + S and search for Disk Cleanup.
- Choose your system drive (usually drive C:).
- Select options like:
- Temporary files
- System created Windows error reporting
- Delivery optimization files
- Click OK to clean up.
Bonus Section: How to Reinstall WSL (Optional)
If you are removing WSL due to issues or conflicts, you can always do a fresh reinstall.
Here is how you can install the latest version of WSL via PowerShell:
wsl --install
This installs WSL 2 by default, along with Ubuntu.
Wrapping Up
Uninstalling WSL may sound tricky, but by following these steps, you can completely remove Linux distributions, WSL components, and unwanted files from your system. Whether you are making space for something new or just doing some digital spring cleaning, this guide ensures that WSL is uninstalled safely and cleanly.
If you ever want to come back to the Linux world, WSL can be reinstalled with a single command, which we have covered as a precaution. Let us know if you face any errors. Happy learning!
The post Uninstall WSL: Step-by-Step Simple Guide appeared first on Unixmen.
by: Temani Afif
Mon, 07 Jul 2025 12:48:29 +0000
This is the fourth post in a series about the new CSS shape() function. So far, we've covered the most common commands you will use to draw various shapes, including lines, arcs, and curves. This time, I want to introduce you to two more commands: close and move. They're fairly simple in practice, and I think you will rarely use them, but they are incredibly useful when you need them.
Better CSS Shapes Using shape()
- Lines and Arcs
- More on Arcs
- Curves
- Close and Move (you are here!)
The close command
In the first part, we said that shape() always starts with a from command to define the first starting point, but what about the end? It should end with a close command.
But you never used any close command in the previous articles!?
That's true. I never did because I either "close" the shape myself or rely on the browser to "close" it for me. Said like that, it's a bit confusing, but let's take a simple example to better understand:
clip-path: shape(from 0 0, line to 100% 0, line to 100% 100%)
If you try this code, you will get a triangle shape, but if you look closely, you will notice that we have only two line commands whereas, to draw a triangle, we need a total of three lines. The last line between 100% 100% and 0 0 is implicit, and that's the part where the browser is closing the shape for me without having to explicitly use a close command.
I could have written the following:
clip-path: shape(from 0 0, line to 100% 0, line to 100% 100%, close)
Or instead, define the last line by myself:
clip-path: shape(from 0 0, line to 100% 0, line to 100% 100%, line to 0 0)
But since the browser is able to close the shape alone, there is no need to add that last line command, nor do we need to explicitly add the close command.
This might lead you to think that the close command is useless, right? It's true in most cases (after all, I have written three articles about shape() without using it), but it's important to know about it and what it does. In some particular cases, it can be useful, especially if used in the middle of a shape.
In this example, my starting point is the center and the logic of the shape is to draw four triangles. In the process, I need to get back to the center each time. So, instead of writing line to center, I simply write close and the browser will automatically get back to the initial point!
Intuitively, we should write the following:
clip-path: shape( from center, line to 20% 0, hline by 60%, line to center, /* triangle 1 */ line to 100% 20%, vline by 60%, line to center, /* triangle 2 */ line to 20% 100%, hline by 60%, line to center, /* triangle 3 */ line to 0 20%, vline by 60% /* triangle 4 */ )
But we can optimize it a little and simply do this instead:
clip-path: shape( from center, line to 20% 0, hline by 60%, close, line to 100% 20%, vline by 60%, close, line to 20% 100%, hline by 60%, close, line to 0 20%, vline by 60% )
We write less code, sure, but another important thing is that if I update the center value with another position, the close command will follow that position.
The move command
Let's turn our attention to another shape() command you may rarely use, but can be incredibly useful in certain situations: the move command.
Most times when we need to draw a shape, it's actually one continuous shape. But it may happen that our shape is composed of different parts not linked together. In these situations, the move command is what you will need.
Let's take an example, similar to the previous one, but this time the triangles don't touch each other:
Intuitively, we may think we need four separate elements, each with its own shape() definition. But that example is a single shape!
The trick is to draw the first triangle, then "move" somewhere else to draw the next one, and so on. The move command is similar to the from command, but we use it in the middle of shape().
clip-path: shape( from 50% 40%, line to 20% 0, hline by 60%, close, /* triangle 1 */ move to 60% 50%, line to 100% 20%, vline by 60%, close, /* triangle 2 */ move to 50% 60%, line to 20% 100%, hline by 60%, close, /* triangle 3 */ move to 40% 50%, line to 0 20%, vline by 60% /* triangle 4 */ )
After drawing the first triangle, we "close" it and "move" to a new point to draw the next triangle. We can have multiple shapes using a single shape() definition. A more generic code will look like the below:
clip-path: shape( from X1 Y1, ..., close, /* shape 1 */ move to X2 Y2, ..., close, /* shape 2 */ ... move to Xn Yn, ... /* shape N */ )
The close commands before the move commands aren't mandatory, so the code can be simplified to this:
clip-path: shape( from X1 Y1, ..., /* shape 1 */ move to X2 Y2, ..., /* shape 2 */ ... move to Xn Yn, ... /* shape N */ )
Let’s look at a few interesting use cases where this technique can be helpful.
Cut-out shapes
Previously, I shared a trick on how to create cut-out shapes using clip-path: polygon(). Starting from any kind of polygon, we can easily invert it to get its cut-out version.
We can do the same using shape(). The idea is to have an intersection between the main shape and the rectangle shape that fits the element boundaries. We need two shapes, hence the need for the move command.
The code is as follows:
.shape { clip-path: shape(from ...., move to 0 0, hline to 100%, vline to 100%, hline to 0); }
You start by creating your main shape, and then you "move" to 0 0 and create the rectangle shape (remember, it's the first shape we create in the first part of this series). We can even go further and introduce a CSS variable to easily switch between the normal shape and the inverted one.
.shape { clip-path: shape(from .... var(--i,)); } .invert { --i:,move to 0 0, hline to 100%, vline to 100%, hline to 0; }
By default, --i is not defined, so var(--i,) will be empty and we get the main shape. If we define the variable with the rectangle shape, we get the inverted version.
Here is an example using a rounded hexagon shape:
In reality, the code should be as follows:
.shape { clip-path: shape(evenodd from .... var(--i,)); } .invert { --i:,move to 0 0, hline to 100%, vline to 100%, hline to 0; }
Notice the evenodd I am adding at the beginning of shape(). I won't bother you with a detailed explanation of what it does, but in some cases the inverted shape is not visible, and the fix is to add evenodd at the beginning. You can check the MDN page for more details.
Another improvement we can do is to add a variable to control the space around the shape. Let's suppose you want to make the hexagon shape of the previous example smaller. It's tedious to update the code of the hexagon, but it's easier to update the code of the rectangle shape.
.shape { clip-path: shape(evenodd from ... var(--i,)) content-box; } .invert { --d: 20px; padding: var(--d); --i: ,move to calc(-1*var(--d)) calc(-1*var(--d)), hline to calc(100% + var(--d)), vline to calc(100% + var(--d)), hline to calc(-1*var(--d)); }
We first update the reference box of the shape to be content-box. Then we add some padding, which will logically reduce the area of the shape since it will no longer include the padding (nor the border). The padding is excluded (invisible) by default, and here comes the trick where we update the rectangle shape to re-include the padding.
That is why the --i variable is so verbose. It uses the value of the padding to extend the rectangle area and cover the whole element as if we didn't have content-box.
Not only can you easily invert any kind of shape, but you can also control the space around it! Here is another demo using the CSS-Tricks logo to illustrate how easy the method is:
This exact same example is available in my SVG-to-CSS converter, providing you with the shape() code without having to do all of the math.
Repetitive shapes
Another interesting use case of the move command is when we need to repeat the same shape multiple times. Do you remember the difference between the by and the to directives? The by directive allows us to define relative coordinates considering the previous point. So, if we create our shape using only by, we can easily reuse the same code as many times as we want.
Let's start with a simple example of a circle shape:
clip-path: shape(from X Y, arc by 0 -50px of 1%, arc by 0 50px of 1%)
Starting from X Y, I draw a first arc moving upward by 50px, then I get back to X Y with another arc using the same offset, but downward. If you are a bit lost with the syntax, try reviewing Part 1 to refresh your memory about the arc command.
How I drew the shape is not important. What is important is that whatever the value of X Y is, I will always get the same circle but in a different position. Do you see where I am going with this idea? If I want to add another circle, I simply repeat the same code with a different X Y.
clip-path: shape( from X1 Y1, arc by 0 -50px of 1%, arc by 0 50px of 1%, move to X2 Y2, arc by 0 -50px of 1%, arc by 0 50px of 1% )
And since the code is the same, I can store the circle shape into a CSS variable and draw as many circles as I want:
.shape { --sh:, arc by 0 -50px of 1%, arc by 0 50px of 1%; clip-path: shape( from X1 Y1 var(--sh), move to X2 Y2 var(--sh), ... move to Xn Yn var(--sh) ) }
You don't want a circle? Easy, you can update the --sh variable with any shape you want. Here is an example with three different shapes:
And guess what? You can invert the whole thing using the cut-out technique by adding the rectangle shape at the end:
This code is a perfect example of the shape() function's power. We don't have any code duplication, and we can simply adjust the shape with CSS variables. This is something we are unable to achieve with the path() function because it doesn't support variables.
Conclusion
That's all for this fourth installment of our series on the CSS shape() function! We didn't make any super complex shapes, but we learned how two simple commands can open a lot of possibilities of what can be done using shape().
Just for fun, here is one more demo recreating a classic three-dot loader using the last technique we covered. Notice how much further we could go, adding things like animation to the mix:
Better CSS Shapes Using shape()
- Lines and Arcs
- More on Arcs
- Curves
- Close and Move (you are here!)
Better CSS Shapes Using shape() — Part 4: Close and Move originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
SaltStack (SALT): A Comprehensive Overview
SaltStack, commonly referred to as SALT, is a powerful open-source infrastructure management platform designed for scalability. Leveraging event-driven workflows, SALT provides an adaptable solution for automating configuration management, remote execution, and orchestration across diverse infrastructures.
This document offers an in-depth guide to SALT for both technical teams and business stakeholders, demystifying its features and applications.
What is SALT?
SALT is a versatile tool that serves multiple purposes in infrastructure management:
Configuration Management Tool (like Ansible, Puppet, Chef): Automates the setup and maintenance of servers and applications.
Remote Execution Engine (similar to Fabric or SSH): Executes commands on systems remotely, whether targeting a single node or thousands.
State Enforcement System: Ensures systems maintain desired configurations over time.
Event-Driven Automation Platform: Detects system changes and triggers actions in real-time.
Key Technologies:
YAML: Used for defining states and configurations in a human-readable format.
Jinja: Enables dynamic templating for YAML files.
Python: Provides extensibility through custom modules and scripts.
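To give a feel for that Python extensibility, here is a minimal sketch of a custom execution module. The module name hello and its location are assumptions for illustration; custom modules live in a _modules directory under your file roots and are synced to minions with saltutil.sync_modules.

# /srv/salt/_modules/hello.py  (hypothetical path and module name)

def greet(name="world"):
    """
    Return a greeting from the minion.

    CLI example (after `salt '*' saltutil.sync_modules`):
        salt '*' hello.greet name=Jessica
    """
    return f"Hello, {name}!"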
Supported Architectures
SALT accommodates various architectures to suit organizational needs:
Master/Minion: A centralized control model where a Salt Master manages Salt Minions to send commands and execute tasks.
Masterless: A decentralized approach using salt-ssh to execute tasks locally without requiring a master node.
Core Components of SALT
| Component | Description |
| --- | --- |
| Salt Master | Central control node that manages minions, sends commands, and orchestrates infrastructure tasks. |
| Salt Minion | Agent installed on managed nodes that executes commands from the master. |
| Salt States | Declarative YAML configuration files that define desired system states (e.g., package installations). |
| Grains | Static metadata about a system (e.g., OS version, IP address), useful for targeting specific nodes. |
| Pillars | Secure, per-minion data storage for secrets and configuration details. |
| Runners | Python modules executed on the master to perform complex orchestration tasks. |
| Reactors | Event listeners that trigger actions in response to system events. |
| Beacons | Minion-side watchers that emit events based on system changes (e.g., file changes or CPU spikes). |
Key Features of SALT
| Feature | Description |
| --- | --- |
| Agent or Agentless | SALT can operate in agent (minion-based) or agentless (masterless) mode. |
| Scalability | Capable of managing tens of thousands of nodes efficiently. |
| Event-Driven | Reacts to real-time system changes via beacons and reactors, enabling automation at scale. |
| Python Extensibility | Developers can extend modules or create custom ones using Python. |
| Secure | Employs ZeroMQ for communication and AES encryption for data security. |
| Role-Based Config | Dynamically applies configurations based on server roles using grains metadata. |
| Granular Targeting | Targets systems using name, grains, regex, or compound filters for precise management. |
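That same granular targeting is also available from Python on the master through Salt's local client. A minimal sketch, assuming the environment and region grains used later in this post and a user allowed to publish jobs:

import salt.client

local = salt.client.LocalClient()

# Compound match: all minions whose grains say environment=dev and region=us-east.
result = local.cmd(
    "G@environment:dev and G@region:us-east",
    "test.ping",
    tgt_type="compound",
)
print(result)  # e.g. {'minion-id': True, ...}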
Common Use Cases
SALT is widely used across industries for tasks like:
Provisioning new systems and applying base configurations.
Enforcing security policies and managing firewall rules.
Installing and enabling software packages (e.g., HTTPD, Nginx).
Scheduling and automating patching across multiple environments.
Monitoring logs and system states with automatic remediation for issues.
Managing VM and container lifecycles (e.g., Docker, LXC).
Real-World Examples
Remote Command Execution:
salt '*' test.ping (pings all connected systems)
salt 'web*' cmd.run 'systemctl restart nginx' (restarts the Nginx service on all web servers)
State File Example (YAML):
nginx:
  pkg.installed: []
  service.running:
    - enable: True
    - require:
      - pkg: nginx
Comparing SALT to Other Tools
| Feature | Salt | Ansible | Puppet | Chef |
| --- | --- | --- | --- | --- |
| Language | YAML + Python | YAML + Jinja | Puppet DSL | Ruby DSL |
| Agent Required | Optional | No | Yes | Yes |
| Push/Pull | Both | Push | Pull | Pull |
| Speed | Very Fast | Medium | Medium | Medium |
| Scalability | High | Medium-High | Medium | Medium |
| Event-Driven | Yes | No | No | Limited |
Security Considerations
SALT ensures secure communication and authentication:
Authentication: Uses public/private key pairs to authenticate minions.
Encryption: Communicates via ZeroMQ encrypted with AES.
Access Control: Defines granular controls using Access Control Lists (ACLs) in the Salt Master configuration.
Additional Information
For organizations seeking enhanced usability, SaltStack Config offers a graphical interface to streamline workflow management. Additionally, SALT's integration with VMware Tanzu provides advanced automation for enterprise systems.
Installation Example
On a master node (e.g., RedHat):
sudo yum install salt-master
On minion nodes:
sudo yum install salt-minion
Configure /etc/salt/minion with:
master: your-master-hostname
Then start the minion:
sudo systemctl enable --now salt-minion
Accept the minion on the master:
sudo salt-key -L   # list all keys
sudo salt-key -A   # accept all pending minion keys
Where to Go Next
Git-based states with gitfs
Masterless setups for container deployments
Custom modules in Python
Event-driven orchestration with beacons + reactors
Example: Patching 600+ Servers Across 3 Regions and 3 Environments
Let's give an example: we have three environments, DEV (Development), PREP (Preproduction), and PROD (Production). Now let's dig a little deeper and say we also have three regions, EUS (East US), WUS (West US), and EUR (Europe), and we would like patches to be applied on different dates: DEV is patched 3 days after the second Tuesday, PREP is patched 5 days after the second Tuesday, and PROD is patched 5 days after the third Tuesday. The final requirement for this mass configuration is that patches should be applied in the client's local time.
In many tools, such as AUM or JetPatch, you would need several different maintenance schedules or plans to create this setup. With SALT, the configuration lives inside the minion, so it is much better defined and simpler to manage.
Use Case Recap
You want to patch three environment groups based on local time and specific schedules:
| Environment | Schedule Rule | Timezone |
| --- | --- | --- |
| Dev | 3rd day after 2nd Tuesday of the month | Local |
| PREP | 5th day after 2nd Tuesday of the month | Local |
| Prod | 5th day after 3rd Tuesday of the month | Local |

Each server knows its environment via Salt grains, and the local timezone via the OS or timedatectl.
Step-by-Step Plan
Set Custom Grains for Environment & Region
Create a Python script (run daily) that:
Checks if today matches the schedule per group
If yes, uses Salt to target minions with the correct grain and run patching
Schedule this script via cron or Salt scheduler
Use Salt States to define patching
Step 1: Define Custom Grains
On each minion, configure /etc/salt/minion.d/env_grains.conf:

grains:
  environment: dev   # or prep, prod
  region: us-east    # or us-west, eu-central, etc.
Then restart the minion:
sudo systemctl restart salt-minion
Verify:
salt '*' grains.items
Step 2: Salt State for Patching
Create patching/init.sls:

update-packages:
  pkg.uptodate:
    - refresh: True
    - retry:
        attempts: 3
        interval: 15

reboot-if-needed:
  module.run:
    - name: system.reboot
    - onlyif: 'test -f /var/run/reboot-required'
Step 3: Python Script to Orchestrate Patching
Let's build run_patching.py. It:
Figures out the correct date for patching
Uses the salt CLI to run patching for each group
Handles each group in its region and timezone
#!/usr/bin/env python3
import subprocess
import datetime
import pytz
from dateutil.relativedelta import relativedelta, TU

# Define your environments and their rules
envs = {
    "dev":  {"offset": 3, "week": 2},
    "prep": {"offset": 5, "week": 2},
    "prod": {"offset": 5, "week": 3}
}

# Map environments to regions (optional)
regions = {
    "dev":  ["us-east", "us-west"],
    "prep": ["us-east", "eu-central"],
    "prod": ["us-east", "us-west", "eu-central"]
}

# Timezones per region
region_tz = {
    "us-east": "America/New_York",
    "us-west": "America/Los_Angeles",
    "eu-central": "Europe/Berlin"
}

# Earliest local hour at which patching may start
desired_hour = 6

def calculate_patch_date(year, month, week, offset):
    nth_tuesday = datetime.date(year, month, 1) + relativedelta(weekday=TU(week))
    return nth_tuesday + datetime.timedelta(days=offset)

def is_today_patch_day(env, region):
    now = datetime.datetime.now(pytz.timezone(region_tz[region]))
    target_day = calculate_patch_date(now.year, now.month,
                                      envs[env]["week"], envs[env]["offset"])
    return now.date() == target_day and now.hour >= desired_hour

def run_salt_target(environment, region):
    # Compound match on grains (G@ tells Salt to match a grain value)
    target = f"G@environment:{environment} and G@region:{region}"
    print(f"Patching {target}...")
    subprocess.run(["salt", "-C", target, "state.apply", "patching"])

def main():
    for env in envs:
        for region in regions[env]:
            if is_today_patch_day(env, region):
                run_salt_target(env, region)

if __name__ == "__main__":
    main()
Make it executable:
chmod +x /srv/scripts/run_patching.py
Test it:
./run_patching.py
Step 4: Schedule via Cron (on Master)
Edit crontab:
crontab -e
Add daily job:
# Run daily at 6 AM UTC
0 6 * * * /srv/scripts/run_patching.py >> /var/log/salt/patching.log 2>&1
This assumes the local time logic is handled in the script using each region’s timezone.
Security & Safety Tips
Test patching states on a few dev nodes first (salt -G 'environment:dev' -l debug state.apply patching)
Add Slack/email notifications (Salt Reactor or Python smtplib)
Consider dry-run support with test=True (in pkg.uptodate)
Use salt-run jobs.list_jobs to track job execution
Optional Enhancements
Use Salt Beacons + Reactors to monitor and patch in real-time
Integrate with JetPatch or Ansible for hybrid control
Add patch deferral logic for critical services
Write to a central patching log DB with job status per host
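For that last item, a central log can be as simple as a SQLite table. Here is a minimal sketch (the schema and file path are assumptions) that the reporting scripts later in this post could read from:

import sqlite3
import datetime

conn = sqlite3.connect("/srv/reports/patching.db")  # hypothetical location
conn.execute("""
    CREATE TABLE IF NOT EXISTS patch_log (
        host TEXT, environment TEXT, region TEXT,
        result TEXT, patched_at TEXT
    )
""")

def record(host, environment, region, result):
    """Insert one row per host per patch run."""
    conn.execute(
        "INSERT INTO patch_log VALUES (?, ?, ?, ?, ?)",
        (host, environment, region, result,
         datetime.datetime.utcnow().isoformat()),
    )
    conn.commit()

record("web01", "dev", "us-east", "success")  # example call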
Overall Architecture
Minions:
Monitor the date/time via beacons
On patch day (based on local logic), send a custom event to the master
Master:
Reacts to that event via a reactor
Targets the sending minion and applies the patching state
Step-by-Step: Salt Beacon + Reactor Model
1. Define a Beacon on Each Minion
File: /etc/salt/minion.d/patchday_beacon.conf

beacons:
  patchday:
    interval: 3600  # check every hour
This refers to a custom beacon we will define.
2. Create the Custom Beacon (on all minions)
File: /srv/salt/_beacons/patchday.py
import datetime
from dateutil.relativedelta import relativedelta, TU
import pytz

__virtualname__ = 'patchday'

def beacon(config):
    ret = []
    grains = __grains__
    env = grains.get('environment', 'unknown')
    region = grains.get('region', 'unknown')

    # Define rules
    rules = {
        "dev":  {"offset": 3, "week": 2},
        "prep": {"offset": 5, "week": 2},
        "prod": {"offset": 5, "week": 3}
    }
    region_tz = {
        "us-east": "America/New_York",
        "us-west": "America/Los_Angeles",
        "eu-central": "Europe/Berlin"
    }

    if env not in rules or region not in region_tz:
        return ret  # invalid or missing config

    tz = pytz.timezone(region_tz[region])
    now = datetime.datetime.now(tz)
    rule = rules[env]

    patch_day = (datetime.date(now.year, now.month, 1)
                 + relativedelta(weekday=TU(rule["week"]))
                 + datetime.timedelta(days=rule["offset"]))

    if now.date() == patch_day:
        ret.append({
            "tag": "patch/ready",
            "env": env,
            "region": region,
            "datetime": now.isoformat()
        })

    return ret
3. Sync Custom Beacon to Minions
On the master:
salt '*' saltutil.sync_beacons
Enable it:
salt '*' beacons.add patchday '{"interval": 3600}'
4. Define Reactor on the Master
File: /etc/salt/master.d/reactor.conf

reactor:
  - 'patch/ready':
    - /srv/reactor/start_patch.sls
5. Create Reactor SLS File
File: /srv/reactor/start_patch.sls

{% set minion_id = data['id'] %}
run_patching:
  local.state.apply:
    - tgt: {{ minion_id }}
    - arg:
      - patching
This reacts to the patch/ready event and applies the patching state to the calling minion.
6. Testing the Full Flow
Restart the minion:
systemctl restart salt-minion
Confirm the beacon is registered:
salt '*' beacons.list
Trigger a manual test (simulate patch day by modifying the date logic; see the sketch after these steps)
Watch events on master:
salt-run state.event pretty=True
Confirm patching applied:
salt '*' saltutil.running
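One low-risk way to run that manual test is to exercise the same date math the beacon uses with a fixed date, rather than touching any system clocks. A minimal sketch, where the dates are only examples:

import datetime
from dateutil.relativedelta import relativedelta, TU

def patch_date(year, month, week, offset):
    # Same rule the beacon applies: the Nth Tuesday of the month plus an offset in days.
    nth_tuesday = datetime.date(year, month, 1) + relativedelta(weekday=TU(week))
    return nth_tuesday + datetime.timedelta(days=offset)

# Pretend "today" is 20 July 2025 and check whether PROD (5 days after the
# 3rd Tuesday) should patch on that date.
fake_today = datetime.date(2025, 7, 20)
print(fake_today == patch_date(2025, 7, 3, 5))  # -> True: the beacon would fire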
7. Example: patching/init.sls
Already shared, but here it is again for completeness:
update-packages:
  pkg.uptodate:
    - refresh: True
    - retry:
        attempts: 3
        interval: 15

reboot-if-needed:
  module.run:
    - name: system.reboot
    - onlyif: 'test -f /var/run/reboot-required'
Benefits of This Model
Real-time and event-driven – no need for polling or external scripts
Timezone-aware, thanks to local beacon logic
Self-healing – minions signal readiness independently
Audit trail – each event is logged in Salt’s event bus
Extensible – you can easily add Slack/email alerts via additional reactors
Goal
Track patching event completions per minion
Store patch event metadata: who patched, when, result, OS, IP, environment, region, etc.
Generate readable reports in:
CSV/Excel
HTML dashboard
JSON for API or SIEM ingestion
Step 1: Customize Reactor to Log Completion
Let’s log each successful patch into a central log file or database (like SQLite or MariaDB).
Update Reactor: /srv/reactor/start_patch.sls
Add a returner to store job status.
{% set minion_id = data['id'] %}
run_patching:
  local.state.apply:
    - tgt: {{ minion_id }}
    - arg:
      - patching
    - kwarg:
        returner: local_json  # You can also use 'mysql', 'elasticsearch', etc.
Configure the returner (e.g., local_json) in /etc/salt/master:

returner_dirs:
  - /srv/salt/returners

ext_returners:
  local_json:
    file: /var/log/salt/patch_report.json
Or use a MySQL returner:
mysql.host: 'localhost'
mysql.user: 'salt'
mysql.pass: 'yourpassword'
mysql.db: 'salt'
mysql.port: 3306
Enable returners:
salt-run saltutil.sync_returners
Step 2: Normalize Patch Data (Optional Post-Processor)
If using JSON log, create a post-processing script to build reports:
process_patch_log.py
import json
import csv
from datetime import datetime

def load_events(log_file):
    with open(log_file, 'r') as f:
        return [json.loads(line) for line in f if line.strip()]

def export_csv(events, out_file):
    with open(out_file, 'w', newline='') as f:
        writer = csv.DictWriter(f, fieldnames=[
            'minion', 'date', 'environment', 'region', 'result'
        ])
        writer.writeheader()
        for e in events:
            writer.writerow({
                'minion': e['id'],
                'date': datetime.fromtimestamp(e['_stamp']).isoformat(),
                'environment': e['return'].get('grains', {}).get('environment', 'unknown'),
                'region': e['return'].get('grains', {}).get('region', 'unknown'),
                'result': 'success' if e['success'] else 'failure'
            })

events = load_events('/var/log/salt/patch_report.json')
export_csv(events, '/srv/reports/patching_report.csv')
Step 3: Build a Simple Web Dashboard
If you want to display reports via a browser:
🛠 Tools:
Flask or FastAPI
Bootstrap or Chart.js
Reads JSON/CSV and renders:
Example Chart Dashboard Features:
✅ Last patch date per server
📍 Patching success rate per region/env
🔴 Highlight failed patching
📆 Monthly compliance timeline
A working example of that Flask dashboard, including the full codebase, is included below.
Step 4: Send Reports via Email (Optional)
🐍 Python: send_report_email.py
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["Subject"] = "Monthly Patch Report"
msg["From"] = "patchbot@example.com"
msg["To"] = "it-lead@example.com"
msg.set_content("Attached is the patch compliance report.")

with open("/srv/reports/patching_report.csv", "rb") as f:
    msg.add_attachment(f.read(), maintype="text", subtype="csv",
                       filename="patching_report.csv")

with smtplib.SMTP("localhost") as s:
    s.send_message(msg)
Schedule that weekly or monthly with cron.
Flask Dashboard (Patch Reporting)
app.py
from flask import Flask, render_template
import csv
from collections import defaultdict

app = Flask(__name__)

@app.route('/')
def index():
    results = []
    success_count = defaultdict(int)
    fail_count = defaultdict(int)

    with open('/srv/reports/patching_report.csv', 'r') as f:
        reader = csv.DictReader(f)
        for row in reader:
            results.append(row)
            key = f"{row['environment']} - {row['region']}"
            if row['result'] == 'success':
                success_count[key] += 1
            else:
                fail_count[key] += 1

    summary = [
        {"group": k, "success": success_count[k], "fail": fail_count[k]}
        for k in sorted(set(success_count) | set(fail_count))
    ]
    return render_template('dashboard.html', results=results, summary=summary)

if __name__ == '__main__':
    app.run(debug=True, host='0.0.0.0', port=5000)
templates/dashboard.html
<!DOCTYPE html>
<html>
<head>
  <title>Patch Compliance Dashboard</title>
  <style>
    body { font-family: Arial; padding: 20px; }
    table { border-collapse: collapse; width: 100%; margin-bottom: 30px; }
    th, td { border: 1px solid #ccc; padding: 8px; text-align: left; }
    th { background-color: #f4f4f4; }
    .fail { background-color: #fdd; }
    .success { background-color: #dfd; }
  </style>
</head>
<body>
  <h1>Patch Compliance Dashboard</h1>

  <h2>Summary</h2>
  <table>
    <tr><th>Group</th><th>Success</th><th>Failure</th></tr>
    {% for row in summary %}
    <tr>
      <td>{{ row.group }}</td>
      <td>{{ row.success }}</td>
      <td>{{ row.fail }}</td>
    </tr>
    {% endfor %}
  </table>

  <h2>Detailed Results</h2>
  <table>
    <tr><th>Minion</th><th>Date</th><th>Environment</th><th>Region</th><th>Result</th></tr>
    {% for row in results %}
    <tr class="{{ row.result }}">
      <td>{{ row.minion }}</td>
      <td>{{ row.date }}</td>
      <td>{{ row.environment }}</td>
      <td>{{ row.region }}</td>
      <td>{{ row.result }}</td>
    </tr>
    {% endfor %}
  </table>
</body>
</html>
How to Use
pip install flask
python app.py
Then visit http://localhost:5000 or your server's IP at port 5000.
Optional: SIEM/Event Forwarding
If you use Elasticsearch, Splunk, or Mezmo:
Use a returner like es_return, splunk_return, or send events via a custom script using a REST API (see the sketch after this list).
Normalize fields: hostname, env, os, patch time, result
Filter dashboards by compliance groupings
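For the custom-script route mentioned in the first bullet above, forwarding a normalized event is a single HTTP POST. A minimal sketch using only the standard library, with a placeholder endpoint and token:

import json
import urllib.request

# Hypothetical SIEM HTTP collector; replace the URL and token with your own.
event = {
    "hostname": "web01",
    "env": "dev",
    "os": "RedHat",
    "patch_time": "2025-07-20T06:00:00Z",
    "result": "success",
}
req = urllib.request.Request(
    "https://siem.example.com/api/events",
    data=json.dumps(event).encode(),
    headers={"Content-Type": "application/json",
             "Authorization": "Bearer <token>"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.status)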
TL;DR: Reporting Components Checklist
| Component | Purpose | Tool |
| --- | --- | --- |
| JSON/DB logging | Track patch status | Returners |
| Post-processing script | Normalize data for business | Python |
| CSV/Excel export | Shareable report format | Python csv module |
| HTML dashboard | Visualize trends/compliance | Flask, Chart.js, Bootstrap |
| Email automation | Notify stakeholders | smtplib, cron |
| SIEM/Splunk integration | Enterprise log ingestion | REST API or native returners |
by: Abhishek Prakash
Fri, 04 Jul 2025 17:30:52 +0530
Is it too 'AWKward' to use AWK in the age of AI? I don't think so. AWK is so underrated despite being so powerful for creating useful automation scripts.
We have had a very good intro to AWK and now I am working on a series that covers the basics of AWK, just like our Bash series.
Hopefully, you'll see it in the next newsletter. Stay tuned 😊