SaltStack (SALT): A Comprehensive Overview
SaltStack, commonly referred to as SALT, is a powerful open-source infrastructure management platform designed for scalability. Leveraging event-driven workflows, SALT provides an adaptable solution for automating configuration management, remote execution, and orchestration across diverse infrastructures.
This document offers an in-depth guide to SALT for both technical teams and business stakeholders, demystifying its features and applications.
What is SALT?
SALT is a versatile tool that serves multiple purposes in infrastructure management:
Configuration Management Tool (like Ansible, Puppet, Chef): Automates the setup and maintenance of servers and applications.
Remote Execution Engine (similar to Fabric or SSH): Executes commands on systems remotely, whether targeting a single node or thousands.
State Enforcement System: Ensures systems maintain desired configurations over time.
Event-Driven Automation Platform: Detects system changes and triggers actions in real-time.
Key Technologies:
YAML: Used for defining states and configurations in a human-readable format.
Jinja: Enables dynamic templating for YAML files.
Python: Provides extensibility through custom modules and scripts.
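For example, a state file can combine YAML with Jinja to branch on grains at render time (a minimal sketch; the state ID and package names are illustrative):
install_webserver:
  pkg.installed:
    {% if grains['os_family'] == 'Debian' %}
    - name: apache2
    {% else %}
    - name: httpd
    {% endif %}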
Supported Architectures
SALT accommodates various architectures to suit organizational needs:
Master/Minion: A centralized model in which a Salt Master sends commands to Salt Minions (agents running on managed nodes), which execute them locally.
Masterless: A decentralized approach in which each node applies states locally with salt-call --local, or is managed agentlessly over SSH with salt-ssh, without requiring a master node.
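For example, a masterless node can apply a state entirely on its own, and salt-ssh can run ad-hoc commands over SSH (a minimal sketch; the state name webserver and the target glob are illustrative):
# Apply a state locally on a masterless node
sudo salt-call --local state.apply webserver
# Run an ad-hoc command over SSH (requires a roster file)
salt-ssh 'web*' cmd.run 'uptime'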
Core Components of SALT
Component | Description
---|---
Salt Master | Central control node that manages minions, sends commands, and orchestrates infrastructure tasks.
Salt Minion | Agent installed on managed nodes that executes commands from the master.
Salt States | Declarative YAML configuration files that define desired system states (e.g., package installations).
Grains | Static metadata about a system (e.g., OS version, IP address), useful for targeting specific nodes.
Pillars | Secure, per-minion data storage for secrets and configuration details.
Runners | Python modules executed on the master to perform complex orchestration tasks.
Reactors | Event listeners that trigger actions in response to system events.
Beacons | Minion-side watchers that emit events based on system changes (e.g., file changes or CPU spikes).
Key Features of SALT
Feature | Description
---|---
Agent or Agentless | SALT can operate in agent (minion-based) or agentless (masterless) mode.
Scalability | Capable of managing tens of thousands of nodes efficiently.
Event-Driven | Reacts to real-time system changes via beacons and reactors, enabling automation at scale.
Python Extensibility | Developers can extend modules or create custom ones using Python.
Secure | Employs ZeroMQ for communication and AES encryption for data security.
Role-Based Config | Dynamically applies configurations based on server roles using grains metadata.
Granular Targeting | Targets systems using name, grains, regex, or compound filters for precise management.
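For instance, granular targeting lets you address minions by ID glob, grain, or compound expression (the minion names and the patching state are illustrative):
salt 'web*' test.ping                        # glob on minion ID
salt -G 'os:Ubuntu' pkg.list_upgrades        # grain match
salt -C 'G@environment:prod and web*' state.apply patching   # compound match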
Common Use Cases
SALT is widely used across industries for tasks like:
Provisioning new systems and applying base configurations.
Enforcing security policies and managing firewall rules.
Installing and enabling software packages (e.g., HTTPD, Nginx).
Scheduling and automating patching across multiple environments.
Monitoring logs and system states with automatic remediation for issues.
Managing VM and container lifecycles (e.g., Docker, LXC).
Real-World Examples
Remote Command Execution:
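For example, run an ad-hoc command on every minion, or on a subset by name (the web* glob is illustrative):
salt '*' cmd.run 'uptime'
salt 'web*' cmd.run 'df -h'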
State File Example (YAML):
nginx:
  pkg.installed: []
  service.running:
    - enable: True
    - require:
      - pkg: nginx
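Assuming this is saved as nginx/init.sls (or nginx.sls) under the master's file roots, apply it with:
salt 'web*' state.apply nginx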
Comparing SALT to Other Tools
Feature | Salt | Ansible | Puppet | Chef
---|---|---|---|---
Language | YAML + Jinja + Python | YAML + Jinja | Puppet DSL | Ruby DSL
Agent Required | Optional | No | Yes | Yes
Push/Pull | Both | Push | Pull | Pull
Speed | Very Fast | Medium | Medium | Medium
Scalability | High | Medium-High | Medium | Medium
Event-Driven | Yes | No | No | Limited
Security Considerations
SALT ensures secure communication and authentication:
Authentication: Uses public/private key pairs to authenticate minions.
Encryption: Communicates via ZeroMQ encrypted with AES.
Access Control: Defines granular controls using Access Control Lists (ACLs) in the Salt Master configuration.
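As an illustration, a publisher_acl entry in the master config can limit a user to specific functions on specific minions (the user name and target glob are illustrative):
publisher_acl:
  webadmin:
    - 'web*':
      - test.ping
      - state.apply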
Additional Information
For organizations seeking enhanced usability, SaltStack Config offers a graphical interface to streamline workflow management; it is also available as part of VMware Aria Automation (formerly vRealize Automation) for enterprise environments.
Installation Example
On a master node (e.g., RHEL/CentOS):
sudo yum install salt-master
sudo systemctl enable --now salt-master
On minion nodes:
sudo yum install salt-minion
Configure /etc/salt/minion with:
master: your-master-hostname
Then start the minion:
sudo systemctl enable --now salt-minion
Accept the minion on the master:
sudo salt-key -L # list all keys
sudo salt-key -A # accept all pending minion keys
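Once keys are accepted, verify connectivity from the master:
salt '*' test.ping
Each responding minion should return True.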
Where to Go Next
Salt Project Docs
Git-based states with gitfs
Masterless setups for container deployments
Custom modules in Python
Event-driven orchestration with beacons + reactors
Example: Patching 600+ Servers Across 3 Regions and 3 Environments
Let's take an example with three environments, DEV (Development), PREP (Preproduction), and PROD (Production), spread across three regions: EUS (East US), WUS (West US), and EUR (Europe). Patches must be applied on staggered dates: DEV is patched 3 days after the second Tuesday of the month, PREP 5 days after the second Tuesday, and PROD 5 days after the third Tuesday. For example, if the second Tuesday falls on the 10th, DEV patches on the 13th and PREP on the 15th. The final requirement is that patches must be applied according to each client's local time.
In many tools, such as Azure Update Manager (AUM) or JetPatch, you would need several different maintenance schedules or plans to create this setup. With SALT, the scheduling configuration lives with the minion's own metadata, so it is more self-contained and simpler to manage.
Use Case Recap
You want to patch three environment groups based on local time and specific schedules:
Environment | Schedule Rule | Timezone
---|---|---
DEV | 3 days after the 2nd Tuesday of the month | Local
PREP | 5 days after the 2nd Tuesday of the month | Local
PROD | 5 days after the 3rd Tuesday of the month | Local
Each server knows its environment via Salt grains, and its local timezone via the OS (e.g., timedatectl).
Step-by-Step Plan
1. Set custom grains for environment and region
2. Create a Python script (run daily) that checks whether today matches the schedule for each group and, if so, uses Salt to target minions with the matching grains and run patching
3. Schedule this script via cron or the Salt scheduler
4. Use Salt states to define patching
Step 1: Define Custom Grains
On each minion, configure /etc/salt/minion.d/env_grains.conf:
grains:
  environment: dev   # or prep, prod
  region: us-east    # or us-west, eu-central, etc.
Then restart the minion:
sudo systemctl restart salt-minion
Verify:
salt '*' grains.items
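You can also confirm that grain-based targeting resolves as expected:
salt -G 'environment:dev' test.ping
salt -C 'G@environment:prod and G@region:eu-central' test.ping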
Step 2: Salt State for Patching
Create patching/init.sls:
update-packages:
  pkg.uptodate:
    - refresh: True
    - retry:
        attempts: 3
        interval: 15

reboot-if-needed:
  module.run:
    - name: system.reboot
    - onlyif: 'test -f /var/run/reboot-required'
Step 3: Python Script to Orchestrate Patching
Let's build run_patching.py. It:
Figures out the correct patch date for each group
Uses the salt CLI to run patching for each group
Handles each group in its own region and timezone
#!/usr/bin/env python3
import subprocess
import datetime

import pytz
from dateutil.relativedelta import relativedelta, TU

# Earliest local hour at which patching may begin
DESIRED_HOUR = 6

# Define your environments and their rules
envs = {
    "dev":  {"offset": 3, "week": 2},
    "prep": {"offset": 5, "week": 2},
    "prod": {"offset": 5, "week": 3},
}

# Map environments to regions (optional)
regions = {
    "dev":  ["us-east", "us-west"],
    "prep": ["us-east", "eu-central"],
    "prod": ["us-east", "us-west", "eu-central"],
}

# Timezones per region
region_tz = {
    "us-east": "America/New_York",
    "us-west": "America/Los_Angeles",
    "eu-central": "Europe/Berlin",
}

def calculate_patch_date(year, month, week, offset):
    # Nth Tuesday of the month (TU(week)), plus the offset in days
    nth_tuesday = datetime.date(year, month, 1) + relativedelta(weekday=TU(week))
    return nth_tuesday + datetime.timedelta(days=offset)

def is_today_patch_day(env, region):
    now = datetime.datetime.now(pytz.timezone(region_tz[region]))
    target_day = calculate_patch_date(now.year, now.month,
                                      envs[env]["week"], envs[env]["offset"])
    return now.date() == target_day and now.hour >= DESIRED_HOUR

def run_salt_target(environment, region):
    # Compound match on both grains (G@ marks a grain match)
    target = f"G@environment:{environment} and G@region:{region}"
    print(f"Patching {target}...")
    subprocess.run(["salt", "-C", target, "state.apply", "patching"], check=False)

def main():
    for env in envs:
        for region in regions[env]:
            if is_today_patch_day(env, region):
                run_salt_target(env, region)

if __name__ == "__main__":
    main()
Make it executable:
chmod +x /srv/scripts/run_patching.py
Test it:
./run_patching.py
Step 4: Schedule via Cron (on Master)
Edit crontab:
crontab -e
Add daily job:
# Run daily at 6 AM UTC
0 6 * * * /srv/scripts/run_patching.py >> /var/log/salt/patching.log 2>&1
This assumes the local time logic is handled in the script using each region’s timezone.
Security & Safety Tips
Test patching states on a few dev nodes first: salt -G 'environment:dev' -l debug state.apply patching
Add Slack/email notifications (via the Salt Reactor or Python's smtplib)
Do a dry run first by passing test=True to state.apply (e.g., salt -G 'environment:dev' state.apply patching test=True)
Use salt-run jobs.list_jobs to track job execution
Optional Enhancements
Use Salt Beacons + Reactors to monitor and patch in real-time
Integrate with JetPatch or Ansible for hybrid control
Add patch deferral logic for critical services
Write to a central patching log DB with job status per host
Overall Architecture
Minions:
Monitor the date/time via beacons
On patch day (based on local logic), send a custom event to the master
Master:
Listens for those events and, via a reactor, applies the patching state to each minion that reports in
Step-by-Step: Salt Beacon + Reactor Model
1. Define a Beacon on Each Minion
File: /etc/salt/minion.d/patchday_beacon.conf
beacons:
  patchday:
    - interval: 3600  # check every hour
This refers to a custom beacon we will define.
2. Create the Custom Beacon (on all minions)
File: /srv/salt/_beacons/patchday.py
import datetime

import pytz
from dateutil.relativedelta import relativedelta, TU

__virtualname__ = 'patchday'


def beacon(config):
    ret = []
    grains = __grains__
    env = grains.get('environment', 'unknown')
    region = grains.get('region', 'unknown')

    # Define rules
    rules = {
        "dev": {"offset": 3, "week": 2},
        "prep": {"offset": 5, "week": 2},
        "prod": {"offset": 5, "week": 3},
    }
    region_tz = {
        "us-east": "America/New_York",
        "us-west": "America/Los_Angeles",
        "eu-central": "Europe/Berlin",
    }

    if env not in rules or region not in region_tz:
        return ret  # invalid or missing config

    tz = pytz.timezone(region_tz[region])
    now = datetime.datetime.now(tz)
    rule = rules[env]
    patch_day = (datetime.date(now.year, now.month, 1)
                 + relativedelta(weekday=TU(rule["week"]))
                 + datetime.timedelta(days=rule["offset"]))

    if now.date() == patch_day:
        # The 'tag' below is appended to the beacon's event tag, so the
        # master sees: salt/beacon/<minion_id>/patchday/patch/ready
        ret.append({
            "tag": "patch/ready",
            "env": env,
            "region": region,
            "datetime": now.isoformat(),
        })
    return ret
3. Sync Custom Beacon to Minions
On the master:
salt '*' saltutil.sync_beacons
Enable it (if it is not already configured via the conf file above):
salt '*' beacons.add patchday '[{"interval": 3600}]'
4. Define Reactor on the Master
File: /etc/salt/master.d/reactor.conf
reactor:
  - 'salt/beacon/*/patchday/*':
    - /srv/reactor/start_patch.sls
Note that beacon events arrive tagged with the prefix salt/beacon/<minion_id>/patchday/, so the reactor matches on that pattern rather than on the bare patch/ready tag.
5. Create Reactor SLS File
File: /srv/reactor/start_patch.sls
{% set minion_id = data['id'] %}

run_patching:
  local.state.apply:
    - tgt: {{ minion_id }}
    - arg:
      - patching
This reacts to the patchday beacon event and applies the patching state to the minion that emitted it.
6. Testing the Full Flow
Restart the minion: systemctl restart salt-minion
Confirm the beacon is registered: salt '*' beacons.list
Trigger a manual test (simulate patch day by temporarily adjusting the date logic)
Watch events on the master: salt-run state.event pretty=True
Confirm patching is running: salt '*' saltutil.running
7. Example: patching/init.sls
Already shared, but here it is again for completeness:
update-packages:
  pkg.uptodate:
    - refresh: True
    - retry:
        attempts: 3
        interval: 15

reboot-if-needed:
  module.run:
    - name: system.reboot
    - onlyif: 'test -f /var/run/reboot-required'
Benefits of This Model
Real-time and event-driven – no need for polling or external scripts
Timezone-aware, thanks to local beacon logic
Self-healing – minions signal readiness independently
Audit trail – each event is logged in Salt’s event bus
Extensible – you can easily add Slack/email alerts via additional reactors
Goal
Track patching event completions per minion
Store patch event metadata: who patched, when, result, OS, IP, environment, region, etc.
Generate readable reports as CSV exports, an HTML dashboard, and email summaries (each covered below)
Step 1: Customize Reactor to Log Completion
Let’s log each successful patch into a central log file or database (like SQLite or MariaDB).
Update Reactor: /srv/reactor/start_patch.sls
Add a returner to store job status.
{% set minion_id = data['id'] %}

run_patching:
  local.state.apply:
    - tgt: {{ minion_id }}
    - arg:
      - patching
    - kwarg:
        returner: local_json  # You can also use 'mysql', 'elasticsearch', etc.
Configure the Returner (e.g., local_json)
Note that local_json is not a built-in Salt returner; it is a small custom returner placed in /srv/salt/returners (a sketch follows below). In /etc/salt/master:
returner_dirs:
  - /srv/salt/returners

ext_returners:
  local_json:
    file: /var/log/salt/patch_report.json
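Since local_json is a custom returner, here is a minimal sketch of what it might look like (the module path and config lookup mirror the assumptions above):
# /srv/salt/returners/local_json.py
import json

__virtualname__ = "local_json"


def __virtual__():
    return __virtualname__


def returner(ret):
    # 'ret' carries id, jid, fun, return, success, etc.
    conf = __opts__.get("ext_returners", {}).get("local_json", {})
    path = conf.get("file", "/var/log/salt/patch_report.json")
    with open(path, "a") as fh:
        fh.write(json.dumps(ret) + "\n")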
Or use a MySQL returner:
mysql.host: 'localhost'
mysql.user: 'salt'
mysql.pass: 'yourpassword'
mysql.db: 'salt'
mysql.port: 3306
Sync the custom returner out to the minions (the minion invokes the returner when the job returns):
salt '*' saltutil.sync_returners
Step 2: Normalize Patch Data (Optional Post-Processor)
If using the JSON log, create a post-processing script to build reports:
process_patch_log.py
import csv
import json


def load_events(log_file):
    with open(log_file, 'r') as f:
        return [json.loads(line) for line in f if line.strip()]


def export_csv(events, out_file):
    with open(out_file, 'w', newline='') as f:
        writer = csv.DictWriter(f, fieldnames=[
            'minion', 'date', 'environment', 'region', 'result'
        ])
        writer.writeheader()
        for e in events:
            ret = e.get('return', {})
            grains = ret.get('grains', {}) if isinstance(ret, dict) else {}
            writer.writerow({
                'minion': e.get('id', 'unknown'),
                # '_stamp' is already an ISO-8601 timestamp string in Salt events
                'date': e.get('_stamp', ''),
                'environment': grains.get('environment', 'unknown'),
                'region': grains.get('region', 'unknown'),
                'result': 'success' if e.get('success') else 'failure',
            })


events = load_events('/var/log/salt/patch_report.json')
export_csv(events, '/srv/reports/patching_report.csv')
Step 3: Build a Simple Web Dashboard
If you want to display reports via a browser:
🛠 Tools: Flask for the web app, with Chart.js and Bootstrap for charts and styling (see the checklist table at the end).
Example Chart Dashboard Features:
✅ Last patch date per server
📍 Patching success rate per region/env
🔴 Highlight failed patching
📆 Monthly compliance timeline
A working example of that Flask dashboard is included below.
Step 4: Send Reports via Email (Optional)
🐍 Python: send_report_email.py
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["Subject"] = "Monthly Patch Report"
msg["From"] = "patchbot@example.com"
msg["To"] = "it-lead@example.com"
msg.set_content("Attached is the patch compliance report.")

with open("/srv/reports/patching_report.csv", "rb") as f:
    msg.add_attachment(f.read(), maintype="text", subtype="csv",
                       filename="patching_report.csv")

with smtplib.SMTP("localhost") as s:
    s.send_message(msg)
Schedule that weekly or monthly with cron.
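For example, a monthly crontab entry (assuming the script is saved as /srv/scripts/send_report_email.py):
# Run at 8 AM on the 1st of each month
0 8 1 * * /usr/bin/python3 /srv/scripts/send_report_email.py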
Flask Dashboard (Patch Reporting)
app.py
from flask import Flask, render_template
import csv
from collections import defaultdict

app = Flask(__name__)


@app.route('/')
def index():
    results = []
    success_count = defaultdict(int)
    fail_count = defaultdict(int)

    with open('/srv/reports/patching_report.csv', 'r') as f:
        reader = csv.DictReader(f)
        for row in reader:
            results.append(row)
            key = f"{row['environment']} - {row['region']}"
            if row['result'] == 'success':
                success_count[key] += 1
            else:
                fail_count[key] += 1

    summary = [
        {"group": k, "success": success_count[k], "fail": fail_count[k]}
        for k in sorted(set(success_count) | set(fail_count))
    ]
    return render_template('dashboard.html', results=results, summary=summary)


if __name__ == '__main__':
    app.run(debug=True, host='0.0.0.0', port=5000)
templates/dashboard.html
<!DOCTYPE html>
<html>
<head>
<title>Patch Compliance Dashboard</title>
<style>
body { font-family: Arial; padding: 20px; }
table { border-collapse: collapse; width: 100%; margin-bottom: 30px; }
th, td { border: 1px solid #ccc; padding: 8px; text-align: left; }
th { background-color: #f4f4f4; }
.fail { background-color: #fdd; }
.success { background-color: #dfd; }
</style>
</head>
<body>
<h1>Patch Compliance Dashboard</h1>
<h2>Summary</h2>
<table>
<tr><th>Group</th><th>Success</th><th>Failure</th></tr>
{% for row in summary %}
<tr>
<td>{{ row.group }}</td>
<td>{{ row.success }}</td>
<td>{{ row.fail }}</td>
</tr>
{% endfor %}
</table>
<h2>Detailed Results</h2>
<table>
<tr><th>Minion</th><th>Date</th><th>Environment</th><th>Region</th><th>Result</th></tr>
{% for row in results %}
<tr class="{{ row.result }}">
<td>{{ row.minion }}</td>
<td>{{ row.date }}</td>
<td>{{ row.environment }}</td>
<td>{{ row.region }}</td>
<td>{{ row.result }}</td>
</tr>
{% endfor %}
</table>
</body>
</html>
How to Use
pip install flask
python app.py
Then visit http://localhost:5000 (or your server's IP at port 5000).
Optional: SIEM/Event Forwarding
If you use Elasticsearch, Splunk, or Mezmo:
Use a returner such as elasticsearch_return or splunk, or send data via a custom script using the SIEM's REST API (a minimal forwarder is sketched after this list).
Normalize fields: hostname, env, os, patch time, result
Filter dashboards by compliance groupings
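A minimal sketch of such a custom forwarder, assuming a generic HTTP event-collector endpoint and token (both the URL and the token are illustrative):
import json
import urllib.request

SIEM_URL = "https://siem.example.com/services/collector/event"  # illustrative endpoint
SIEM_TOKEN = "REPLACE_ME"                                        # illustrative token

def forward_event(event: dict) -> None:
    # Normalize the fields the SIEM dashboards will filter on
    payload = {
        "host": event.get("id", "unknown"),
        "env": event.get("environment", "unknown"),
        "patch_time": event.get("_stamp", ""),
        "result": "success" if event.get("success") else "failure",
    }
    req = urllib.request.Request(
        SIEM_URL,
        data=json.dumps({"event": payload}).encode(),
        headers={"Authorization": f"Splunk {SIEM_TOKEN}",
                 "Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

# Replay the JSON patch log into the SIEM
with open("/var/log/salt/patch_report.json") as fh:
    for line in fh:
        if line.strip():
            forward_event(json.loads(line))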
TL;DR: Reporting Components Checklist
Component | Purpose | Tool
---|---|---
JSON/DB logging | Track patch status | Returners
Post-processing script | Normalize data for business | Python
CSV/Excel export | Shareable report format | Python csv module
HTML dashboard | Visualize trends/compliance | Flask, Chart.js, Bootstrap
Email automation | Notify stakeholders | smtplib, cron
SIEM/Splunk integration | Enterprise log ingestion | REST API or native returners