Jessica Brown

Everything posted by Jessica Brown

  1. You are reading Part 34 of the 57-part series: Harden and Secure Linux Servers. [Level 4] This series covers progressive security measures, from fundamental hardening techniques to enterprise-grade defense strategies. Each article delves into a specific security practice, explaining its importance and providing step-by-step guidance for implementation. To explore more security best practices, visit the main guide for a full breakdown of all levels and recommendations.

Data Loss Prevention (DLP) ensures that sensitive data remains protected from unauthorized access, accidental leaks, and intentional breaches. DLP is essential for:
✅ Preventing unauthorized file modifications and access.
✅ Protecting confidential information from data exfiltration.
✅ Ensuring compliance with security standards (GDPR, HIPAA, PCI-DSS).
By implementing DLP measures, you reduce the risk of data breaches and ensure data integrity.

How to Implement Data Loss Prevention in Linux

1. Track Sensitive Data Changes with File Integrity Monitoring (FIM)
Use AIDE (Advanced Intrusion Detection Environment) to monitor file modifications in critical directories.

Install and Configure AIDE
Install AIDE (Debian/Ubuntu):
sudo apt install aide -y
For CentOS/RHEL:
sudo yum install aide -y
Initialize the AIDE database:
sudo aideinit
sudo mv /var/lib/aide/aide.db.new.gz /var/lib/aide/aide.db.gz
Configure AIDE to monitor sensitive directories:
sudo nano /etc/aide/aide.conf
Add directories to monitor:
/etc/ p
/home/user/Documents/ p
/var/log/ p
(Monitors /etc/, /home/user/Documents/, and /var/log/ for unauthorized changes.)
Schedule regular integrity checks:
sudo crontab -e
Add a scheduled task (runs daily at 3 AM):
0 3 * * * /usr/bin/aide --check | mail -s "AIDE Integrity Report" admin@example.com

2. Encrypt Sensitive Data with GPG or OpenSSL
Encryption ensures that even if data is accessed, it remains unreadable without the correct decryption key.
Encrypt Files Using GPG (GnuPG)
Encrypt a file with a passphrase:
gpg --symmetric --cipher-algo AES256 confidential.txt
Decrypt the file when needed:
gpg --output confidential.txt --decrypt confidential.txt.gpg

Encrypt Data with OpenSSL
Encrypt a file with AES-256:
openssl enc -aes-256-cbc -salt -in confidential.txt -out confidential.enc
Decrypt the file:
openssl enc -aes-256-cbc -d -in confidential.enc -out confidential.txt

3. Set Strict File Permissions for Sensitive Data
Restrict access to sensitive files to authorized users only.

Restrict File Access Using chmod
Set strict permissions (owner-only access):
sudo chmod 600 /home/user/confidential.txt
Restrict entire directories:
sudo chmod -R 700 /home/user/private/
(Only the owner can access this directory.)

Use Access Control Lists (ACLs) for Fine-Grained Access
Grant read-only access to a specific user:
sudo setfacl -m u:username:r /home/user/confidential.txt
Verify ACL settings:
getfacl /home/user/confidential.txt

4. Prevent Unauthorized Data Transfers

Block USB Storage Devices (Prevent Data Exfiltration)
Blacklist the USB mass storage module so it cannot load at boot:
echo "blacklist usb-storage" | sudo tee -a /etc/modprobe.d/blacklist.conf
Unload the module immediately if it is currently loaded:
sudo modprobe -r usb-storage

Restrict Data Transfers with iptables
Block unauthorized outbound traffic:
sudo iptables -A OUTPUT -p tcp --dport 21 -j DROP # Blocks FTP
sudo iptables -A OUTPUT -p tcp --dport 22 -j DROP # Blocks SSH file transfers
sudo iptables -A OUTPUT -p tcp --dport 80 -j DROP # Blocks HTTP uploads
Allow only trusted IPs to transfer data:
sudo iptables -A OUTPUT -p tcp -d trusted_ip --dport 22 -j ACCEPT

5. Monitor and Log Access to Sensitive Files

Enable Auditd to Log File Access
Install auditd (Debian/Ubuntu):
sudo apt install auditd -y
(Already installed by default on CentOS/RHEL.)
Monitor a specific file for access attempts:
sudo auditctl -w /home/user/confidential.txt -p war -k sensitive_file_access
Check audit logs for file access:
sudo ausearch -k sensitive_file_access --start today

Best Practices for Data Loss Prevention (DLP)
✅ Regularly back up encrypted copies of sensitive data (rsync -avz /secure_data /backup).
✅ Monitor logs (/var/log/auth.log) for unauthorized access attempts.
✅ Use Intrusion Detection Systems (IDS) like OSSEC or Wazuh to alert on unauthorized file changes.
✅ Restrict sensitive data access to specific users and groups.
By implementing Data Loss Prevention (DLP) measures, you secure critical information, prevent unauthorized access, and ensure compliance with data protection regulations, keeping your Linux server safe from data breaches.
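The baseline-and-check cycle that AIDE performs can be sketched with plain shell tools. The script below is an illustration of the concept only (it is not AIDE, and it uses temporary paths rather than real sensitive directories): it records SHA-256 checksums of a directory, then detects a later modification by re-hashing and comparing.

```shell
#!/bin/sh
# Illustrative sketch of file-integrity monitoring (the idea behind AIDE).
# A temp directory stands in for a sensitive path such as /etc.
set -eu

WATCH_DIR=$(mktemp -d)
BASELINE=$(mktemp)

echo "secret data" > "$WATCH_DIR/confidential.txt"

# Baseline (conceptually, "aideinit"): checksum every file.
( cd "$WATCH_DIR" && find . -type f -exec sha256sum {} + | sort ) > "$BASELINE"

# Simulate an unauthorized modification.
echo "tampered" >> "$WATCH_DIR/confidential.txt"

# Check (conceptually, "aide --check"): re-hash and diff against the baseline.
if ( cd "$WATCH_DIR" && find . -type f -exec sha256sum {} + | sort ) \
     | diff -q "$BASELINE" - > /dev/null; then
  STATUS=clean
else
  STATUS=changed
fi
echo "integrity status: $STATUS"
```

AIDE layers per-file attribute rules (the "p" selectors above), a tamper-resistant database, and reporting on top of this basic idea.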
  2. You are reading Part 33 of the 57-part series: Harden and Secure Linux Servers. [Level 4] This series covers progressive security measures, from fundamental hardening techniques to enterprise-grade defense strategies. Each article delves into a specific security practice, explaining its importance and providing step-by-step guidance for implementation. To explore more security best practices, visit the main guide for a full breakdown of all levels and recommendations.

Regular vulnerability scans help identify and remediate security flaws before attackers exploit them. These scans:
✅ Detect outdated software, misconfigurations, and security weaknesses.
✅ Help ensure compliance with security frameworks (PCI-DSS, HIPAA, ISO 27001).
✅ Provide proactive defense by addressing vulnerabilities before they become threats.
By running scheduled scans, you reduce attack surfaces and strengthen system security.

How to Perform Vulnerability Scans in Linux

1. Install OpenVAS (Open Vulnerability Assessment System)
OpenVAS is an open-source vulnerability scanner that detects known security flaws.

Install OpenVAS on Debian/Ubuntu:
sudo apt update && sudo apt install openvas -y
For CentOS/RHEL, install from source using Greenbone Security tools.

Start OpenVAS services:
sudo systemctl start openvas-scanner
sudo systemctl enable openvas-scanner

Run the OpenVAS initial setup:
sudo greenbone-feed-sync
(This updates vulnerability definitions.)

Access the OpenVAS web interface:
Open a web browser and go to: https://your-server-ip:9392
Log in with default credentials (admin/admin after setup).
Start a scan by selecting New Task → Full Scan on your server's IP.
Review results and address vulnerabilities.

2. Install and Use Nessus for Advanced Vulnerability Scanning
Nessus is a powerful enterprise-grade vulnerability scanner that offers detailed security assessments.
Download and install Nessus.
For Debian/Ubuntu:
wget https://www.tenable.com/downloads/api/v1/public/pages/nessus/downloads/14704/download?i_agree_to_tenable_license_agreement=true -O Nessus.deb
sudo dpkg -i Nessus.deb
For CentOS/RHEL:
wget https://www.tenable.com/downloads/api/v1/public/pages/nessus/downloads/14706/download?i_agree_to_tenable_license_agreement=true -O Nessus.rpm
sudo rpm -ivh Nessus.rpm

Start the Nessus service:
sudo systemctl start nessusd
sudo systemctl enable nessusd

Access the Nessus web interface:
Open a web browser and go to: https://your-server-ip:8834
Create an account and select Nessus Essentials (free) or Nessus Professional.
Update plugins and start a new scan to analyze system vulnerabilities.

3. Automate Weekly Vulnerability Scans
Schedule a weekly scan using OpenVAS or Nessus with a cron job:
sudo crontab -e
Add the following line to run OpenVAS weekly at 2 AM on Sundays:
0 2 * * 0 openvas-scan-command
(Replace openvas-scan-command with the actual command from the OpenVAS API or CLI.)
For Nessus scans, use:
/opt/nessus/bin/nessuscli scan run --target=your-server-ip

4. Review and Address Vulnerabilities
After each scan:
✅ Review security reports and identify critical vulnerabilities.
✅ Apply software patches and security updates (sudo apt update && sudo apt upgrade -y).
✅ Restrict unnecessary services and ports (use sudo ufw status or sudo ss -tuln).
✅ Monitor logs and intrusion attempts (sudo cat /var/log/auth.log | grep failed).

Best Practices for Vulnerability Management
🔹 Run scans at least once a month or after major system updates.
🔹 Use a combination of tools (OpenVAS, Nessus, and Nmap) for comprehensive security checks.
🔹 Fix high-risk vulnerabilities immediately to prevent exploitation.
🔹 Monitor and log scan results to track security improvements over time.
By conducting regular vulnerability scans, you proactively detect and fix security weaknesses, reducing the risk of breaches and strengthening your Linux server's security posture.
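The "restrict unnecessary services and ports" review in step 4 can be partly automated by parsing ss -tuln output so unexpected listeners stand out. A small sketch follows; the ss output is hard-coded sample data so it runs anywhere (on a real host you would substitute the live output of ss -tuln):

```shell
#!/bin/sh
# Sketch: extract listening ports from ss -tuln style output for review.
# SS_OUTPUT is a hard-coded sample; replace with: SS_OUTPUT=$(ss -tuln)
set -eu

SS_OUTPUT='Netid State  Recv-Q Send-Q Local Address:Port Peer Address:Port
tcp   LISTEN 0      128          0.0.0.0:22        0.0.0.0:*
tcp   LISTEN 0      511          0.0.0.0:80        0.0.0.0:*
tcp   LISTEN 0      70         127.0.0.1:3306      0.0.0.0:*'

# The port is the text after the last colon in the local-address column.
PORTS=$(printf '%s\n' "$SS_OUTPUT" \
  | awk '/LISTEN/ {n = split($5, a, ":"); print a[n]}')

echo "Listening ports to review:"
echo "$PORTS"
```

Anything in the list that is not a service you deliberately run is a candidate for disabling or firewalling before the next scan.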
  3. You are reading Part 32 of the 57-part series: Harden and Secure Linux Servers. [Level 4] This series covers progressive security measures, from fundamental hardening techniques to enterprise-grade defense strategies. Each article delves into a specific security practice, explaining its importance and providing step-by-step guidance for implementation. To explore more security best practices, visit the main guide for a full breakdown of all levels and recommendations. Two-Factor Authentication (2FA) adds an extra layer of security to SSH logins, requiring a second verification step (such as a push notification, SMS code, or phone call) before granting access. This significantly reduces the risk of: ✅ Unauthorized logins from stolen credentials. ✅ Brute-force SSH attacks. ✅ Privilege escalation by attackers. Using Duo Security’s PAM module, you can implement strong 2FA authentication for SSH logins on your Linux server. How to Set Up Duo 2FA for SSH Logins1. Install the Duo PAM ModuleFor Debian/Ubuntu: sudo apt install libpam-duo -y For CentOS/RHEL: sudo yum install duo_unix -y 2. Create a Duo Security Account & Get API CredentialsSign up for Duo Security: Go to Duo Admin Panel. Register your server and generate the following API credentials: Integration Key Secret Key API Hostname Copy these credentials, as they will be used in the configuration. 3. Configure Duo PAM for SSH AuthenticationEdit the Duo configuration file: sudo nano /etc/duo/pam_duo.conf Add the following settings (replace with your Duo API credentials): [duo] ikey = YOUR_INTEGRATION_KEY skey = YOUR_SECRET_KEY host = YOUR_API_HOSTNAME pushinfo = yes autopush = yes failmode = secure ✅ ikey → Your Duo Integration Key ✅ skey → Your Duo Secret Key ✅ host → Your Duo API Hostname ✅ autopush = yes → Automatically send a push notification ✅ failmode = secure → If Duo is unavailable, deny access (set to safe to allow login if Duo fails) Save and exit the file. 4. 
Enable Duo Authentication in SSHEdit the PAM SSH configuration file: sudo nano /etc/pam.d/sshd At the top of the file, add: auth required pam_duo.so (This ensures SSH logins require Duo authentication.) 5. Modify SSH Configuration to Use DuoEdit the SSH daemon configuration: sudo nano /etc/ssh/sshd_config Ensure the following settings are set: UsePAM yes ChallengeResponseAuthentication yes AuthenticationMethods publickey,keyboard-interactive Save and exit the file. 6. Restart SSH to Apply Changessudo systemctl restart sshd 7. Test Duo 2FA for SSH LoginsFrom another terminal, try logging into your server: ssh username@your_server_ip Enter your password (if using password authentication). Receive a Duo push notification, SMS code, or phone call. Approve the request in your Duo Mobile app. If successful, you will be logged in. Additional Enhancements for Duo 2FA on SSH✅ Require Duo for sudo commands (Optional): sudo nano /etc/pam.d/sudo Add: auth required pam_duo.so (Users must verify 2FA before running sudo commands.) ✅ Whitelist Trusted IPs to Skip 2FA: If you want to allow specific IP addresses to bypass 2FA, modify /etc/duo/pam_duo.conf: exempt_ip = 192.168.1.100/24 (Users from this subnet won't be prompted for 2FA.) ✅ Enforce Key-Based Authentication + 2FA for Maximum Security: Modify /etc/ssh/sshd_config: PasswordAuthentication no AuthenticationMethods publickey,keyboard-interactive This ensures only SSH keys + Duo 2FA are allowed for logins. Best Practices for Duo 2FA on SSH🔹 Always test 2FA setup before closing your session to prevent being locked out. 🔹 Require Duo 2FA for all privileged users (e.g., root, sudo users). 🔹 Monitor SSH login attempts using logs: sudo cat /var/log/auth.log | grep sshd 🔹 Combine 2FA with other security layers (firewalls, fail2ban, intrusion detection). 
By implementing Duo 2FA for SSH, you add a powerful security layer that prevents unauthorized access and protects against brute-force attacks, making your Linux server much more secure.
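Because a bad sshd_config can lock you out, a pre-flight check before restarting sshd (step 6) is cheap insurance. This hedged sketch verifies that a config file contains the three directives Duo needs; it checks a hard-coded sample in a temp file, so point CONFIG at /etc/ssh/sshd_config to use it for real:

```shell
#!/bin/sh
# Sketch: verify sshd_config contains the settings Duo 2FA requires
# before restarting sshd. Uses a sample temp file, not the live config.
set -eu

CONFIG=$(mktemp)
cat > "$CONFIG" <<'EOF'
UsePAM yes
ChallengeResponseAuthentication yes
AuthenticationMethods publickey,keyboard-interactive
EOF

MISSING=0
for directive in 'UsePAM yes' \
                 'ChallengeResponseAuthentication yes' \
                 'AuthenticationMethods publickey,keyboard-interactive'; do
  # -x requires a whole-line match, so partial or commented lines don't pass.
  grep -qx "$directive" "$CONFIG" || { echo "missing: $directive"; MISSING=1; }
done

if [ "$MISSING" -eq 0 ]; then
  echo "sshd_config ready for Duo 2FA"
fi
```

Run the check, keep an existing SSH session open, restart sshd, and only then test a fresh login.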
  4. You are reading Part 31 of the 57-part series: Harden and Secure Linux Servers. [Level 4] This series covers progressive security measures, from fundamental hardening techniques to enterprise-grade defense strategies. Each article delves into a specific security practice, explaining its importance and providing step-by-step guidance for implementation. To explore more security best practices, visit the main guide for a full breakdown of all levels and recommendations. Application sandboxing isolates applications from the rest of the system, preventing them from accessing sensitive files, making unauthorized system modifications, or exploiting vulnerabilities. This enhances security by: ✅ Preventing applications from accessing system-critical files. ✅ Reducing the risk of privilege escalation if an application is compromised. ✅ Limiting the damage caused by malware or rogue applications. By sandboxing applications, you minimize the risk that a vulnerability in one application will affect the entire system. How to Implement Application Sandboxing in Linux1. Use Firejail for Process SandboxingFirejail is a lightweight sandboxing tool that restricts applications using Linux namespaces and seccomp-bpf filters. Install Firejail (Debian/Ubuntu-based systems)sudo apt install firejail -y For CentOS/RHEL, install from source: sudo yum install epel-release -y sudo yum install firejail -y Run an Application in a Firejail SandboxTo launch a program in a restricted environment: firejail firefox (This isolates Firefox, preventing it from accessing system files outside its sandbox.) List Available Firejail Profiles (Pre-configured Security Rules)ls /etc/firejail/ (Pre-configured profiles exist for common applications like browsers, text editors, and media players.) 
Create a Custom Firejail Profile for an ApplicationTo define strict file access rules for a sandboxed application: Copy the default profile: sudo cp /etc/firejail/default.profile /etc/firejail/custom_app.profile Edit it: sudo nano /etc/firejail/custom_app.profile Restrict access to specific directories: noblacklist /home/user/Documents whitelist /home/user/sandbox Run the application with the custom profile: firejail --profile=/etc/firejail/custom_app.profile myapp 2. Use AppArmor for Mandatory Access Control (Ubuntu/Debian)AppArmor restricts what files, capabilities, and network access an application can use. Check if AppArmor is Enabledsudo apparmor_status Install AppArmor (If Not Installed)sudo apt install apparmor-utils -y Enable AppArmor for an ApplicationCreate a profile for an application (e.g., nginx): sudo nano /etc/apparmor.d/usr.sbin.nginx Define restricted access rules: /usr/sbin/nginx { include <abstractions/base> /var/www/html/** r, /etc/nginx/nginx.conf r, /var/log/nginx/** rw, } Load and enforce the profile: sudo apparmor_parser -a /etc/apparmor.d/usr.sbin.nginx 3. Use Flatpak or Snap for Secure Application SandboxingBoth Flatpak and Snap run applications in isolated containers with limited system access. Install Flatpak (Recommended for Desktop Apps)sudo apt install flatpak -y Run applications securely: flatpak run app_name Install Snap for Secure Application Deploymentsudo apt install snapd -y Run applications securely: snap install app_name Best Practices for Application Sandboxing✅ Sandbox high-risk applications like browsers, media players, and software with internet access. ✅ Regularly update sandboxing policies to stay ahead of security threats. ✅ Combine sandboxing with AppArmor, SELinux, or Firejail for maximum security. ✅ Monitor application behavior to ensure sandboxes are working (journalctl -xe | grep apparmor). 
By sandboxing applications, you restrict their access to system resources, reducing the risk of malware, exploits, and unauthorized modifications, keeping your Linux environment secure.
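The "Create a Custom Firejail Profile" steps above can be scripted rather than edited by hand. The sketch below generates a restrictive profile into a temp file (on a real system it would go to /etc/firejail/custom_app.profile); the sandbox path is the hypothetical /home/user/sandbox from the example above, and the directives used (include, whitelist, caps.drop, net) are standard Firejail profile options:

```shell
#!/bin/sh
# Sketch: generate a whitelist-only Firejail profile.
# PROFILE is a temp file here; install to /etc/firejail/ on a real host.
set -eu

SANDBOX_DIR=/home/user/sandbox   # hypothetical path from the example above
PROFILE=$(mktemp)

cat > "$PROFILE" <<EOF
# Pull in Firejail's common deny rules, then allow only the sandbox dir,
# drop all capabilities, and disable networking for the application.
include disable-common.inc
whitelist $SANDBOX_DIR
caps.drop all
net none
EOF

echo "Generated profile:"
cat "$PROFILE"
```

The application would then be launched with firejail --profile=/etc/firejail/custom_app.profile myapp, as shown in the article.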
  5. You are reading Part 30 of the 57-part series: Harden and Secure Linux Servers. [Level 3] This series covers progressive security measures, from fundamental hardening techniques to enterprise-grade defense strategies. Each article delves into a specific security practice, explaining its importance and providing step-by-step guidance for implementation. To explore more security best practices, visit the main guide for a full breakdown of all levels and recommendations. A Web Application Firewall (WAF) acts as a protective shield between users and your web application, filtering and blocking malicious traffic before it reaches your server. This is essential for preventing: ✅ SQL Injection – Attackers injecting malicious SQL queries. ✅ Cross-Site Scripting (XSS) – Malicious scripts executed in users' browsers. ✅ Cross-Site Request Forgery (CSRF) – Unauthorized actions performed on behalf of authenticated users. ✅ Malicious bots and DDoS attacks targeting your website. By implementing a WAF, you reduce attack risks and protect your web applications from common vulnerabilities. How to Set Up ModSecurity WAF for Apache or Nginx1. Install ModSecurity and OWASP Core Rule Set (CRS)For Apache (Debian/Ubuntu): sudo apt install libapache2-mod-security2 -y sudo apt install modsecurity-crs -y For Nginx (Debian/Ubuntu): sudo apt install libnginx-mod-security -y sudo apt install modsecurity-crs -y For CentOS/RHEL (Apache): sudo yum install mod_security -y 2. 
Enable ModSecurity on Your Web ServerFor Apache:Enable ModSecurity module: sudo a2enmod security2 Edit ModSecurity configuration file: sudo nano /etc/modsecurity/modsecurity.conf Change: SecRuleEngine DetectionOnly To: SecRuleEngine On Restart Apache to apply changes: sudo systemctl restart apache2 For Nginx:Edit the Nginx configuration file: sudo nano /etc/nginx/nginx.conf Add the ModSecurity directive under the http block: modsecurity on; modsecurity_rules_file /etc/nginx/modsec/main.conf; Restart Nginx to apply changes: sudo systemctl restart nginx 3. Configure the OWASP ModSecurity Core Rule Set (CRS)The OWASP CRS provides predefined WAF rules to protect against common attacks. Edit the OWASP CRS settings file: sudo nano /etc/modsecurity/crs/crs-setup.conf Enable stricter security rules: Set paranoia level (higher = stricter security): SecAction \ "id:900000, \ phase:1, \ nolog, \ pass, \ t:none, \ setvar:tx.paranoia_level=2" (Default is 1, but 2 increases security with minimal false positives.) Restart Apache or Nginx to apply: sudo systemctl restart apache2 # For Apache sudo systemctl restart nginx # For Nginx 4. Regularly Update WAF RulesTo stay protected against new threats, update ModSecurity and CRS rules frequently: ✅ For Debian/Ubuntu: sudo apt update && sudo apt upgrade -y ✅ For CentOS/RHEL: sudo yum update -y ✅ For Manual CRS Updates: cd /etc/modsecurity/crs/ sudo git pull sudo systemctl restart apache2 # Restart Apache 5. Test WAF ProtectionSimulate an SQL Injection Attack: Try accessing: http://yourwebsite.com/?id=1' OR '1'='1 (Your WAF should block the request.) Check WAF logs for blocked attacks: sudo cat /var/log/modsec_audit.log Alternative: Use Cloud-Based WAF SolutionsIf managing a self-hosted WAF is complex, consider cloud-based WAFs, which offer automatic rule updates and AI-driven protection. ✅ Cloudflare WAF → Easy to set up, protects against bots, DDoS, and web attacks. ✅ AWS WAF → Integrates with AWS services to filter malicious traffic. 
✅ Sucuri WAF → Blocks attacks at the DNS level before they reach your server.

Best Practices for WAF Security
✅ Enable ModSecurity logging to monitor blocked threats.
✅ Fine-tune WAF rules to balance security and usability (avoid false positives).
✅ Combine WAF with Fail2Ban for extra protection against brute-force attacks.
✅ Regularly test your web application for vulnerabilities (e.g., with OWASP ZAP).
By deploying a Web Application Firewall (WAF), you protect your website from hackers, prevent data breaches, and secure web applications against modern cyber threats.
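The SQL-injection test in step 5 can be understood with a toy version of what a WAF rule does: match the request against an attack signature and refuse it. This is purely illustrative (real OWASP CRS rules normalize encodings and use far more robust patterns than the crude signatures here):

```shell
#!/bin/sh
# Toy illustration of WAF-style request filtering. Not a real WAF:
# the signatures below are deliberately crude, for demonstration only.
set -eu

check_request() {
  # "Block" requests whose query string matches a classic SQLi probe.
  case "$1" in
    *"' OR '1'='1"* | *"UNION SELECT"*) echo "403 blocked" ;;
    *) echo "200 allowed" ;;
  esac
}

BENIGN=$(check_request "/products?id=42")
ATTACK=$(check_request "/products?id=1' OR '1'='1")
echo "benign request: $BENIGN"
echo "attack request: $ATTACK"
```

ModSecurity does this inspection on every inbound request before it reaches Apache or Nginx application code, which is why the test URL in step 5 should return a 403 once SecRuleEngine is On.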
  6. You are reading Part 29 of the 57-part series: Harden and Secure Linux Servers. [Level 3] This series covers progressive security measures, from fundamental hardening techniques to enterprise-grade defense strategies. Each article delves into a specific security practice, explaining its importance and providing step-by-step guidance for implementation. To explore more security best practices, visit the main guide for a full breakdown of all levels and recommendations. Configuration drift occurs when system settings, security policies, or application configurations deviate from their intended secure baseline. These deviations can lead to: ✅ Security vulnerabilities – Unintended changes may introduce exploitable weaknesses. ✅ Compliance failures – Systems may no longer align with security policies (e.g., PCI-DSS, HIPAA). ✅ Operational issues – Unauthorized changes can cause system instability or failures. By automating configuration monitoring, you ensure your servers remain secure and compliant over time. How to Monitor and Prevent Configuration Drift1. Use Configuration Management Tools to Enforce BaselinesTools like Ansible, Puppet, and Chef allow you to define and maintain a secure configuration state. ✅ Ansible Example – Enforcing SSH Security Policies: Install Ansible: sudo apt install ansible -y # Debian/Ubuntu sudo yum install ansible -y # CentOS/RHELCreate a Playbook to Enforce SSH Security: - name: Enforce SSH Security hosts: all tasks: - name: Disable root login lineinfile: path: /etc/ssh/sshd_config regexp: '^PermitRootLogin' line: 'PermitRootLogin no' - name: Restart SSH service service: name: sshd state: restarted Run the Playbook to apply and enforce settings: ansible-playbook ssh_security.yml 2. Detect Configuration Drift with Security Auditing ToolsRegularly audit system configurations to identify unauthorized changes. 
✅ Use Lynis to Detect Configuration Deviations: Install Lynis: sudo apt install lynis -y Run a system security audit: sudo lynis audit system Review security recommendations and drifts in system configurations. ✅ Use diff to Compare Configuration Files: Track changes in critical configuration files: diff -u /etc/ssh/sshd_config /backup/sshd_config (This checks if /etc/ssh/sshd_config has changed from its backed-up version.) ✅ Automate Configuration Checks with a Cron Job: Schedule periodic configuration integrity checks: crontab -e Add: 0 2 * * * diff -u /etc/ssh/sshd_config /backup/sshd_config | mail -s "SSH Config Changes" admin@example.com (Runs every night at 2 AM and emails differences if found.) 3. Set Up File Integrity Monitoring (FIM) for Configuration FilesInstall AIDE (Advanced Intrusion Detection Environment): sudo apt install aide -y Initialize the AIDE database: sudo aideinit sudo mv /var/lib/aide/aide.db.new.gz /var/lib/aide/aide.db.gz Run a configuration check manually: sudo aide --check Automate periodic integrity checks: sudo crontab -e Add: 0 3 * * * /usr/bin/aide --check Best Practices for Managing Configuration Drift✅ Maintain a "golden image" of secure configurations to compare against live systems. ✅ Use Git or version control to track configuration changes (git diff /etc/ssh/sshd_config). ✅ Log and monitor configuration changes (/var/log/audit.log). ✅ Automate drift detection and alerting with tools like OSSEC, Wazuh, or Splunk. By monitoring configuration drift, you detect unauthorized changes, enforce security policies, and ensure compliance—keeping your Linux servers secure and stable.
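The diff-based drift check described above is easy to demonstrate end to end. In this runnable sketch, temp files stand in for /etc/ssh/sshd_config and its backup, and a simulated unauthorized change is detected exactly as the nightly cron job would detect it:

```shell
#!/bin/sh
# Sketch of configuration-drift detection via diff against a baseline.
# Temp files stand in for /backup/sshd_config and /etc/ssh/sshd_config.
set -eu

BASELINE=$(mktemp)   # stands in for /backup/sshd_config
LIVE=$(mktemp)       # stands in for /etc/ssh/sshd_config
REPORT=$(mktemp)

printf 'PermitRootLogin no\nPasswordAuthentication no\n' > "$BASELINE"
cp "$BASELINE" "$LIVE"
echo 'PermitRootLogin yes' >> "$LIVE"   # simulated unauthorized change

# diff exits non-zero when the files differ: that is the drift signal.
if diff -u "$BASELINE" "$LIVE" > "$REPORT"; then
  DRIFT=no
else
  DRIFT=yes          # the cron job above would mail $REPORT at this point
fi
echo "drift detected: $DRIFT"
```

Anything that appears in the report with a leading "+" is a line added since the baseline and should be investigated or reverted.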
  7. You are reading Part 28 of the 57-part series: Harden and Secure Linux Servers. [Level 3] This series covers progressive security measures, from fundamental hardening techniques to enterprise-grade defense strategies. Each article delves into a specific security practice, explaining its importance and providing step-by-step guidance for implementation. To explore more security best practices, visit the main guide for a full breakdown of all levels and recommendations. The Principle of Least Privilege (PoLP) ensures that users, processes, and applications only have the minimum level of access necessary to perform their tasks. This helps: ✅ Prevent accidental or intentional system damage from overly privileged accounts. ✅ Reduce the risk of privilege escalation attacks (where attackers exploit excessive permissions). ✅ Limit the impact of compromised accounts, reducing what an attacker can do. By following PoLP, you minimize security risks and increase overall system resilience. How to Implement PoLP in Linux1. Restrict sudo Access (Limit Privileged Commands)Instead of granting full sudo access, allow only specific commands per user. Edit the sudoers file securely: sudo visudo Assign specific privileges to a user: username ALL=(ALL) NOPASSWD: /bin/systemctl restart nginx (User username can restart Nginx but has no other sudo privileges.) Restrict sudo access by group: sudo groupadd limitedsudo sudo usermod -aG limitedsudo username %limitedsudo ALL=(ALL) NOPASSWD: /usr/bin/systemctl restart apache2 (Users in the limitedsudo group can only restart Apache.) 2. Limit Permissions on Critical Files and DirectoriesRestrict write access to system files: sudo chmod 644 /etc/passwd sudo chmod 600 /etc/shadow Set user-specific file permissions using ACLs: sudo setfacl -m u:username:r /var/log/syslog (Grants username read-only access to /var/log/syslog.) 3. Apply Least Privilege to Database UsersInstead of giving full database access, assign only necessary privileges. 
Create a MySQL user with restricted access: CREATE USER 'dbuser'@'host' IDENTIFIED BY 'strongpassword'; GRANT SELECT, INSERT ON database.* TO 'dbuser'@'host'; (User dbuser can only SELECT and INSERT in database, not DELETE or DROP tables.) Revoke excessive privileges: REVOKE ALL PRIVILEGES ON database.* FROM 'dbuser'@'host'; 4. Enforce Least Privilege for Services and ProcessesRun applications with non-root users: For web servers (Nginx/Apache): sudo useradd -r -s /sbin/nologin webuser Update service files to use a limited user: sudo nano /etc/systemd/system/myapp.service [Service] User=webuser Group=webgroup Restart the service: sudo systemctl daemon-reload sudo systemctl restart myapp Use chroot to isolate applications: sudo chroot /var/chroot/myapp 5. Monitor Privileged Actions and Enforce Least Privilege PoliciesCheck who has sudo privileges: sudo grep 'sudo' /etc/group Log and monitor sudo usage: sudo cat /var/log/auth.log | grep sudo Use auditd to track privileged commands: sudo auditctl -w /etc/sudoers -p wa -k sudo_changes Best Practices for PoLP Implementation✅ Regularly review user and service privileges to remove unnecessary access. ✅ Use Role-Based Access Control (RBAC) where possible to enforce permissions per role. ✅ Limit access to root and admin accounts, and require MFA for administrative logins. ✅ Apply the least privilege principle to automation scripts by using dedicated service accounts. By enforcing the Principle of Least Privilege (PoLP), you reduce attack surfaces, prevent privilege escalation, and enhance overall system security.
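The file-permission rules in step 2 can be audited automatically: compare each sensitive file's actual mode against its required mode and tighten it when they differ. A hedged sketch, using a temp file in place of a real target like /etc/shadow (required mode 600 in the text above):

```shell
#!/bin/sh
# Sketch: audit a sensitive file's permissions and remediate drift.
# A temp file stands in for a real target such as /etc/shadow.
set -eu

FILE=$(mktemp)
chmod 644 "$FILE"    # simulate a file left too permissive
REQUIRED=600

# GNU stat first, BSD stat as a fallback.
mode_of() { stat -c '%a' "$1" 2>/dev/null || stat -f '%Lp' "$1"; }

BEFORE=$(mode_of "$FILE")
if [ "$BEFORE" != "$REQUIRED" ]; then
  echo "WARN: $FILE has mode $BEFORE, expected $REQUIRED; fixing"
  chmod "$REQUIRED" "$FILE"
fi
AFTER=$(mode_of "$FILE")
echo "mode after audit: $AFTER"
```

Extending this to a list of path/mode pairs gives a simple least-privilege audit that can run from cron alongside the sudo-usage monitoring in step 5.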
  8. You are reading Part 27 of the 57-part series: Harden and Secure Linux Servers. [Level 3] This series covers progressive security measures, from fundamental hardening techniques to enterprise-grade defense strategies. Each article delves into a specific security practice, explaining its importance and providing step-by-step guidance for implementation. To explore more security best practices, visit the main guide for a full breakdown of all levels and recommendations. Regularly rotating encryption keys, passwords, and certificates reduces the risk of old, compromised credentials being used for unauthorized access. Attackers often exploit stolen or leaked credentials, so periodic rotation helps: ✅ Mitigate the impact of credential leaks by ensuring old credentials become invalid. ✅ Reduce exposure to insider threats by revoking unnecessary access. ✅ Ensure compliance with security best practices (e.g., PCI-DSS, HIPAA, ISO 27001). How to Implement Key and Credential Rotation1. Use a Credential Management System (CMS)Using a secure vault to store and manage credentials helps automate key rotation. ✅ AWS KMS (Key Management Service) for AWS resources Automatically rotates encryption keys every 12 months. Configure key policies to enforce periodic key expiration. Example command to create a new KMS key: aws kms create-key --description "Rotated Key" --key-usage ENCRYPT_DECRYPT ✅ HashiCorp Vault → Securely manage SSH keys, API tokens, and certificates. ✅ GCP Cloud KMS → Rotate Google Cloud encryption keys automatically. 2. Rotate SSH Keys RegularlyGenerate a new SSH key pair (RSA 4096-bit): ssh-keygen -t rsa -b 4096 -f ~/.ssh/new_id_rsa Copy the new public key to the remote server: ssh-copy-id -i ~/.ssh/new_id_rsa.pub username@server_ip Update SSH client config to use the new key: nano ~/.ssh/config Host server_ip IdentityFile ~/.ssh/new_id_rsa Remove the old SSH key from the server: ssh username@server_ip "rm -f ~/.ssh/old_key.pub" 3. 
Rotate API Keys and Database CredentialsMany APIs and cloud services support key rotation. ✅ For AWS IAM keys: aws iam create-access-key --user-name myuser aws iam delete-access-key --access-key-id OLD_KEY_ID --user-name myuser ✅ For GitHub tokens: Navigate to GitHub → Developer Settings → Personal Access Tokens Generate a new API token, then delete the old one. ✅ For MySQL database passwords: ALTER USER 'dbuser'@'localhost' IDENTIFIED BY 'new_secure_password'; FLUSH PRIVILEGES; 4. Automate Credential Expiration and RotationSet password expiration policies in Linux: sudo chage -M 90 -W 10 username (Forces password rotation every 90 days, with a 10-day warning.) Use Ansible or Terraform to automate key rotation in cloud environments. Best Practices for Credential Rotation✅ Use MFA (Multi-Factor Authentication) to protect against stolen credentials. ✅ Enforce strong passwords and use passphrase-protected SSH keys. ✅ Store rotated credentials in a secure vault (e.g., AWS Secrets Manager, HashiCorp Vault). ✅ Monitor logs (/var/log/auth.log) for unauthorized access attempts. By regularly rotating encryption keys and credentials, you reduce security risks, prevent unauthorized access, and maintain a strong security posture.
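The chage -M 90 policy above works because the system tracks when a credential was last changed (in /etc/shadow, as days since the Unix epoch) and compares that age to the maximum. The same logic can be sketched for any credential you track yourself; here the last-changed day is simulated rather than read from a real shadow entry:

```shell
#!/bin/sh
# Sketch of the age check behind "rotate every 90 days".
# LAST_CHANGED is simulated; /etc/shadow stores the real value per user.
set -eu

MAX_AGE_DAYS=90
TODAY=$(( $(date +%s) / 86400 ))     # days since epoch, as shadow stores it
LAST_CHANGED=$(( TODAY - 120 ))      # simulated: rotated 120 days ago

AGE=$(( TODAY - LAST_CHANGED ))
if [ "$AGE" -gt "$MAX_AGE_DAYS" ]; then
  STATUS=expired                     # rotation overdue: rotate and revoke
else
  STATUS=ok
fi
echo "credential age: $AGE days ($STATUS)"
```

The same pattern applies to API keys and certificates: record the issue date, compute the age, and alert or rotate once it passes the policy threshold.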
  9. You are reading Part 26 of the 57-part series: Harden and Secure Linux Servers. [Level 3] This series covers progressive security measures, from fundamental hardening techniques to enterprise-grade defense strategies. Each article delves into a specific security practice, explaining its importance and providing step-by-step guidance for implementation. To explore more security best practices, visit the main guide for a full breakdown of all levels and recommendations. A Host-Based Intrusion Detection System (HIDS) continuously monitors system logs, file integrity, user activity, and network behavior to detect suspicious activity. If an attacker compromises your server, HIDS can: ✅ Detect unauthorized file modifications or privilege escalation attempts. ✅ Alert you in real-time about potential intrusions or security violations. ✅ Provide forensic logs for investigating security incidents. HIDS solutions such as OSSEC, Wazuh, and Tripwire help identify anomalies and prevent security breaches. How to Install and Configure OSSEC (HIDS) on Your Linux Server1. Install OSSEC HIDSFor Debian/Ubuntu: sudo apt install ossec-hids -y For CentOS/RHEL: sudo yum install ossec-hids -y 2. Configure OSSEC to Monitor System ActivityEdit the OSSEC configuration file: sudo nano /var/ossec/etc/ossec.conf Define monitoring rules (e.g., log files, directories, SSH activity). Example: <localfile> <log_format>syslog</log_format> <location>/var/log/auth.log</location> </localfile> (Monitors /var/log/auth.log for suspicious SSH logins.) Save and close the file. 3. Enable OSSEC AlertingSet up email alerts for security events: <global> <email_notification>yes</email_notification> <email_to>admin@example.com</email_to> <smtp_server>smtp.example.com</smtp_server> </global> (Replace admin@example.com with your email.) Restart OSSEC to apply changes: sudo systemctl restart ossec 4. 
Verify OSSEC is Running ProperlyCheck OSSEC status: sudo systemctl status ossec View real-time alerts: sudo cat /var/ossec/logs/alerts.log Alternative: Install Wazuh (Advanced OSSEC-Based HIDS)Wazuh is an enhanced fork of OSSEC with a web dashboard, SIEM integration, and advanced threat detection. Install Wazuh on Debian/Ubuntu: curl -sO https://packages.wazuh.com/4.x/wazuh-install.sh sudo bash wazuh-install.sh --wazuh-server Enable Wazuh monitoring: sudo systemctl start wazuh-manager Best Practices for HIDS Security✅ Monitor system logs (/var/log/auth.log, /var/log/syslog) for unusual activity. ✅ Enable real-time alerting to receive security notifications immediately. ✅ Integrate HIDS with a SIEM (e.g., ELK, Splunk, or Graylog) for central log analysis. ✅ Regularly update detection rules to stay protected against evolving threats. By deploying HIDS solutions like OSSEC or Wazuh, you can detect security threats early, prevent system compromises, and maintain a well-monitored Linux server.
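The kind of rule OSSEC applies to /var/log/auth.log can be sketched in a few lines. This is illustrative only (flag_failed_logins and the sample log lines are made up); a real HIDS applies hundreds of such patterns with correlation and thresholds.

```shell
#!/bin/sh
# Hedged sketch of a HIDS-style rule: match failed SSH logins in
# syslog-format input and extract the offending source IP.

flag_failed_logins() {
  # reads syslog-format lines on stdin, prints one source IP per failed login
  grep 'Failed password' | sed -n 's/.*from \([0-9.]*\).*/\1/p'
}

printf '%s\n' \
  'sshd[812]: Failed password for root from 203.0.113.7 port 53411 ssh2' \
  'sshd[813]: Accepted publickey for admin from 192.0.2.10 port 50022 ssh2' \
  | flag_failed_logins
```

In live use this would be fed by `tail -f /var/log/auth.log`; OSSEC's advantage is doing this continuously with alerting and log correlation built in.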
  10. You are reading Part 25 of the 57-part series: Harden and Secure Linux Servers. [Level 3] This series covers progressive security measures, from fundamental hardening techniques to enterprise-grade defense strategies. Each article delves into a specific security practice, explaining its importance and providing step-by-step guidance for implementation. To explore more security best practices, visit the main guide for a full breakdown of all levels and recommendations. The Domain Name System (DNS) translates domain names (e.g., example.com) into IP addresses. However, standard DNS is not secure, and attackers can manipulate DNS records to redirect users to malicious sites (a type of attack known as DNS spoofing or cache poisoning). DNSSEC (DNS Security Extensions) prevents these attacks by digitally signing DNS records, ensuring their authenticity and integrity. By enabling DNSSEC, you: ✅ Prevent DNS spoofing and cache poisoning attacks. ✅ Ensure that users reach the correct website instead of a fraudulent one. ✅ Strengthen your domain’s overall security posture. How to Enable DNSSEC on BIND (Self-Managed DNS Server)If you run your own BIND DNS server, you can enable DNSSEC validation to secure domain name resolution. Open the BIND configuration file: sudo nano /etc/bind/named.conf.options Enable DNSSEC validation by adding the following line inside the options block: dnssec-validation auto; (Older guides also list dnssec-enable yes;, but that option is obsolete since BIND 9.16 and was removed in BIND 9.18; dnssec-validation auto; alone is sufficient on current releases.) Restart the BIND service to apply changes: sudo systemctl restart bind9 Verify DNSSEC is working by querying a signed domain: dig +dnssec example.com How to Enable DNSSEC on Cloud DNS ProvidersMost cloud-based DNS services offer one-click DNSSEC activation in their dashboard.
✅ For AWS Route 53: Go to Route 53 Console → Hosted Zones Select your domain → Enable DNSSEC Signing ✅ For Cloudflare DNS: Navigate to DNS Settings Find the DNSSEC section → Click Enable ✅ For Google Cloud DNS: Go to Cloud Console → Cloud DNS Select your DNS zone → Click Enable DNSSEC How to Test if DNSSEC is Enabled for Your DomainUse the dig command to check DNSSEC records: dig +short DNSKEY example.com (If DNSSEC is enabled, it will return cryptographic keys.) Check DNSSEC status using an online tool: Verisign DNSSEC Debugger Google Public DNS Check Best Practices for DNS Security✅ Always enable DNSSEC for all domains you own to prevent DNS hijacking. ✅ Use DNS-over-HTTPS (DoH) or DNS-over-TLS (DoT) for encrypted DNS queries. ✅ Regularly rotate DNSSEC keys to maintain security. ✅ Monitor DNS logs for signs of tampering or suspicious activity. By enabling DNSSEC, you secure your domain’s DNS records, protect users from phishing attacks, and ensure trustworthy domain resolution.
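The `dig +short DNSKEY` check above can be wrapped in a tiny helper: a non-empty answer means the zone publishes DNSSEC keys. This sketch is illustrative (dnssec_status is a made-up name) and inspects captured dig output rather than querying live DNS.

```shell
#!/bin/sh
# Hedged sketch: interpret the output of `dig +short DNSKEY <domain>`.

dnssec_status() {
  # $1 = output of: dig +short DNSKEY <domain>
  if [ -n "$1" ]; then
    echo "enabled"
  else
    echo "disabled"
  fi
}

# sample DNSKEY record (key material shortened for display)
dnssec_status "257 3 13 mdsswUyr3DPW132mOi8V9xESWE8j..."
dnssec_status ""
```

Against a live domain you would call `dnssec_status "$(dig +short DNSKEY example.com)"`.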
  11. You are reading Part 24 of the 57-part series: Harden and Secure Linux Servers. [Level 3] This series covers progressive security measures, from fundamental hardening techniques to enterprise-grade defense strategies. Each article delves into a specific security practice, explaining its importance and providing step-by-step guidance for implementation. To explore more security best practices, visit the main guide for a full breakdown of all levels and recommendations. Encryption ensures that even if attackers gain access to your data, they cannot read or modify it without the correct decryption key. This is crucial for: ✅ Protecting sensitive files from unauthorized access. ✅ Securing network communication to prevent interception. ✅ Ensuring compliance with security regulations (GDPR, HIPAA, PCI-DSS). Encryption is applied in two main areas: Data at Rest – Encrypting files, directories, and disk partitions. Data in Transit – Securing data transferred over networks. How to Encrypt Data at Rest (Stored Data)1. Encrypt Files and Directories Using eCryptfs (Ubuntu/Debian)eCryptfs is a simple file-level encryption tool that encrypts files on-the-fly. Install eCryptfs: sudo apt install ecryptfs-utils -y Enable private directory encryption for a user: sudo ecryptfs-setup-private After setup, encrypted files are stored in: ~/.Private 2. Encrypt Entire Disk or Partitions with LUKSLUKS (Linux Unified Key Setup) is the standard for full-disk encryption. Install LUKS: sudo apt install cryptsetup -y Encrypt a partition (e.g., /dev/sdb1): sudo cryptsetup luksFormat /dev/sdb1 Open and mount the encrypted partition: sudo cryptsetup luksOpen /dev/sdb1 my_secure_data sudo mkfs.ext4 /dev/mapper/my_secure_data sudo mount /dev/mapper/my_secure_data /mnt/secure Automatically unlock LUKS partitions at boot (optional): sudo nano /etc/crypttab Add: my_secure_data /dev/sdb1 none luks How to Encrypt Data in Transit (Network Traffic)1. 
Force HTTPS for Web Traffic (SSL/TLS Encryption)Use Let’s Encrypt to install SSL certificates for web servers. Install Certbot (Let’s Encrypt SSL tool): sudo apt install certbot -y Enable HTTPS on Nginx/Apache: sudo certbot --nginx # For Nginx sudo certbot --apache # For Apache Verify HTTPS is working: curl -I https://yourdomain.com 2. Encrypt File Transfers Using SFTP (Instead of FTP)Regular FTP is insecure. Use SFTP (SSH File Transfer Protocol) instead. To transfer files securely using SFTP: sftp username@your_server_ip To securely copy files via SCP: scp file.txt username@your_server_ip:/home/username/ 3. Secure Remote Access with a VPN (WireGuard or OpenVPN)A VPN encrypts all network traffic between your devices and the server. Install WireGuard: sudo apt install wireguard -y Generate keys: wg genkey | tee privatekey | wg pubkey > publickey Set up WireGuard configuration in /etc/wireguard/wg0.conf (Example for private networking). Best Practices for Encryption Security✅ Use strong encryption algorithms (AES-256, RSA-4096). ✅ Rotate encryption keys regularly to prevent long-term compromise. ✅ Store encryption keys securely (use hardware security modules (HSMs) if available). ✅ Monitor logs for encryption failures (journalctl -xe | grep crypt). ✅ Use end-to-end encryption when communicating over untrusted networks. By encrypting data at rest and in transit, you protect sensitive information from unauthorized access, breaches, and data leaks, ensuring a secure Linux environment.
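As a concrete instance of encrypting data at rest with OpenSSL, here is a hedged sketch of a symmetric encrypt/decrypt roundtrip. It assumes the openssl CLI is installed; the file paths and passphrase are placeholders, and a real deployment should never hard-code a passphrase.

```shell
#!/bin/sh
# Encrypt and decrypt a file with AES-256-CBC via the openssl CLI.
# -pbkdf2 derives the key from the passphrase with a proper KDF.

printf 'confidential payroll data\n' > /tmp/secret.txt

# encrypt (produces /tmp/secret.txt.enc)
openssl enc -aes-256-cbc -pbkdf2 -salt \
  -in /tmp/secret.txt -out /tmp/secret.txt.enc -pass pass:placeholder

# decrypt and confirm the roundtrip restored the original bytes
openssl enc -d -aes-256-cbc -pbkdf2 \
  -in /tmp/secret.txt.enc -out /tmp/secret.dec -pass pass:placeholder
cmp -s /tmp/secret.txt /tmp/secret.dec && echo "roundtrip OK"
```

For key material rather than passphrases, GPG (as named in the section title) or LUKS for whole partitions are the more standard choices.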
  12. You are reading Part 23 of the 57-part series: Harden and Secure Linux Servers. [Level 3] This series covers progressive security measures, from fundamental hardening techniques to enterprise-grade defense strategies. Each article delves into a specific security practice, explaining its importance and providing step-by-step guidance for implementation. To explore more security best practices, visit the main guide for a full breakdown of all levels and recommendations. Rate limiting protects your Linux server from denial-of-service (DoS) attacks and brute-force login attempts by restricting the number of requests or connections an IP address can make within a certain period. By enforcing rate limits, you can: ✅ Prevent automated brute-force attacks on SSH and other services. ✅ Reduce the impact of DoS attacks by limiting excessive traffic. ✅ Ensure fair resource usage by preventing abuse from a single client. How to Implement Rate Limiting for SSH Using iptablesYou can use iptables to limit the number of new SSH connections from a single IP to 3 attempts per minute. Add a rule to track new SSH connection attempts: sudo iptables -A INPUT -p tcp --dport 22 -m state --state NEW -m recent --set (Tracks new SSH connection attempts for rate limiting.) Block excessive attempts within a 60-second window: sudo iptables -A INPUT -p tcp --dport 22 -m state --state NEW -m recent --update --seconds 60 --hitcount 4 -j DROP --seconds 60 → Measures connection attempts within a 1-minute window. --hitcount 4 → If an IP attempts more than 3 connections in 60 seconds, it is blocked. Save iptables rules to persist after reboot: sudo iptables-save | sudo tee /etc/iptables.rules Alternative: Implement SSH Rate Limiting with Fail2BanFail2Ban automatically blocks IPs that exceed login attempt limits over a defined period. 
Install Fail2Ban (if not already installed): sudo apt install fail2ban -y # For Debian/Ubuntu sudo yum install fail2ban -y # For CentOS/RHEL Edit the Fail2Ban SSH configuration: sudo nano /etc/fail2ban/jail.conf (Better: put your overrides in /etc/fail2ban/jail.local, which package upgrades will not overwrite.) Modify the [sshd] section: [sshd] enabled = true maxretry = 3 findtime = 60 bantime = 600 maxretry = 3 → Blocks an IP after 3 failed attempts. findtime = 60 → Tracks failed login attempts within a 1-minute window. bantime = 600 → Blocks the IP for 10 minutes. Restart Fail2Ban to apply changes: sudo systemctl restart fail2ban Check Fail2Ban status and blocked IPs: sudo fail2ban-client status sshd Best Practices for Rate Limiting✅ Apply rate limits to all critical services (not just SSH) like HTTP, FTP, and APIs. ✅ Combine iptables rate limiting with Fail2Ban for layered security. ✅ Monitor logs (/var/log/auth.log) to detect brute-force attempts and fine-tune limits. ✅ Whitelist trusted IP addresses to prevent accidental blocking. By implementing rate limiting, you reduce brute-force login risks, prevent abuse, and protect your server from DoS attacks.
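The iptables recent-module logic above (--seconds 60 --hitcount 4) can be modeled in a few lines. This is a hedged, illustrative simulation of the decision only; timestamps are passed in explicitly so the behavior is deterministic, and nothing here is code iptables actually runs.

```shell
#!/bin/sh
# Toy model of the rule pair above: a 4th connection attempt inside a
# 60-second window is dropped. decide() is an illustrative helper.

decide() {
  # $1 = time of the new attempt (seconds); $2.. = times of earlier attempts
  now=$1; shift
  recent=0
  for t in "$@"; do
    if [ $(( now - t )) -lt 60 ]; then
      recent=$(( recent + 1 ))
    fi
  done
  # hitcount 4 = the new attempt plus 3 recent earlier ones
  if [ "$recent" -ge 3 ]; then echo "DROP"; else echo "ACCEPT"; fi
}

decide 100 10 20 90    # only the attempt at t=90 is recent, so ACCEPT
decide 100 50 60 70    # three recent attempts already, so DROP
```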
  13. You are reading Part 22 of the 57-part series: Harden and Secure Linux Servers. [Level 3] This series covers progressive security measures, from fundamental hardening techniques to enterprise-grade defense strategies. Each article delves into a specific security practice, explaining its importance and providing step-by-step guidance for implementation. To explore more security best practices, visit the main guide for a full breakdown of all levels and recommendations. File Integrity Monitoring (FIM) ensures the security and integrity of critical system files by detecting unauthorized modifications. Attackers often modify configuration files, binaries, or logs to hide their presence, escalate privileges, or install malware. By using FIM tools like AIDE, you can: ✅ Detect unauthorized file changes caused by malware or attackers. ✅ Identify tampering with system configurations or security logs. ✅ Ensure compliance with security policies and standards. How to Set Up File Integrity Monitoring Using AIDE1. Install AIDE (Advanced Intrusion Detection Environment)For Debian/Ubuntu: sudo apt install aide -y For CentOS/RHEL: sudo yum install aide -y 2. Initialize the AIDE DatabaseThe AIDE database is a snapshot of file hashes, attributes, and permissions at a specific point in time. sudo aideinit This creates the initial baseline database. The database is stored at: /var/lib/aide/aide.db.new.gz Move the initialized database to the active database location: sudo mv /var/lib/aide/aide.db.new.gz /var/lib/aide/aide.db.gz 3. Run a File Integrity CheckTo compare the current system state with the baseline: sudo aide --check If no unauthorized changes are found, the output will confirm system integrity. If modifications are detected, review and investigate them immediately. 4. 
Automate Regular AIDE Checks Using Cron JobsTo schedule daily integrity checks at 3:00 AM, add a cron job: sudo crontab -e Add the following line: 0 3 * * * /usr/bin/aide --check (Runs AIDE every day at 3 AM and logs any changes.) Best Practices for File Integrity Monitoring (FIM)✅ Monitor Critical Files and Directories Edit the AIDE configuration file to specify which files to monitor: sudo nano /etc/aide/aide.conf Example rule to monitor system binaries and config files: /etc/ p /bin/ p /usr/bin p /sbin/ p /var/log p p → Tracks permissions, ownership, and changes to these files. ✅ Integrate FIM with Logging and Alerts Configure FIM to send alerts when changes are detected. Combine FIM with SIEM tools (e.g., Splunk, ELK, or Wazuh) for real-time monitoring. ✅ Regularly Update the AIDE Database After legitimate updates or patches, reinitialize the database: sudo aideinit sudo mv /var/lib/aide/aide.db.new.gz /var/lib/aide/aide.db.gz (Prevents false positives from authorized changes.) ✅ Use Other FIM Solutions for Advanced Security Tripwire (alternative to AIDE): sudo apt install tripwire Wazuh (SIEM with built-in FIM capabilities) Why FIM is Essential for Security🔹 Detects rootkits, malware, and unauthorized system modifications. 🔹 Helps with compliance (PCI-DSS, HIPAA, ISO 27001) by monitoring critical system files. 🔹 Provides an audit trail for forensic analysis after a security incident. By implementing File Integrity Monitoring (FIM), you can detect suspicious changes, prevent security breaches, and maintain a hardened Linux environment.
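The AIDE workflow above (baseline, change, check) can be demonstrated end to end with nothing but sha256sum. This hedged sketch uses a throwaway directory and mirrors the aideinit and `aide --check` steps; AIDE itself also tracks permissions, ownership, and other metadata.

```shell
#!/bin/sh
# Minimal file-integrity check in the spirit of AIDE, using sha256sum.
workdir=$(mktemp -d)
echo "server config v1" > "$workdir/app.conf"

# 1. baseline the directory (the aideinit step)
( cd "$workdir" && sha256sum app.conf > baseline.db )

# 2. the file is later modified (legitimately or not)
echo "server config v2" > "$workdir/app.conf"

# 3. the check step flags the drift (the `aide --check` step)
if ! ( cd "$workdir" && sha256sum -c baseline.db >/dev/null 2>&1 ); then
  echo "integrity violation: app.conf changed since baseline"
fi
```

This also illustrates why the baseline must be re-taken after legitimate updates: the check cannot tell a patch from an attacker, only that bytes changed.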
  14. You are reading Part 21 of the 57-part series: Harden and Secure Linux Servers. [Level 3] This series covers progressive security measures, from fundamental hardening techniques to enterprise-grade defense strategies. Each article delves into a specific security practice, explaining its importance and providing step-by-step guidance for implementation. To explore more security best practices, visit the main guide for a full breakdown of all levels and recommendations. Each open port on a server represents a potential entry point for attackers. If unnecessary ports are left open, they can be scanned, exploited, or used for unauthorized access. By limiting open ports to only essential services, you reduce the attack surface and increase overall security. Check Open Ports on Your SystemUse the following commands to list all open ports and active services: ✅ Using ss (Recommended for modern Linux systems): sudo ss -tuln ✅ Using netstat (Older alternative): sudo netstat -tulnp ✅ Using lsof to check which services are using ports: sudo lsof -i -P -n (The -tuln flags tell ss to list TCP/UDP ports in listening mode, numerically, without resolving names.) Look for unexpected open ports that do not belong to required services. 1. Disable and Stop Unnecessary Services✅ Identify running services: sudo systemctl list-units --type=service --state=running ✅ Stop an unnecessary service immediately: sudo systemctl stop service_name ✅ Disable a service to prevent it from starting at boot: sudo systemctl disable service_name (Example: If FTP (vsftpd) is running but not needed, disable it) sudo systemctl stop vsftpd sudo systemctl disable vsftpd 2.
Restrict Open Ports Using a Firewall (UFW or iptables)✅ Using UFW (Uncomplicated Firewall - Ubuntu/Debian) Allow only essential ports (SSH, HTTP, HTTPS): sudo ufw allow 22 sudo ufw allow 80 sudo ufw allow 443 Deny all other connections by default: sudo ufw default deny incoming Enable the firewall: sudo ufw enable Verify firewall rules: sudo ufw status ✅ Using iptables (For Advanced Users) Allow SSH, HTTP, and HTTPS only: sudo iptables -A INPUT -p tcp --dport 22 -j ACCEPT sudo iptables -A INPUT -p tcp --dport 80 -j ACCEPT sudo iptables -A INPUT -p tcp --dport 443 -j ACCEPT sudo iptables -A INPUT -j DROP (Before the final DROP rule, also accept loopback and established traffic, e.g. sudo iptables -A INPUT -i lo -j ACCEPT and sudo iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT, or the server will drop replies to its own outbound connections.) Save iptables rules permanently: sudo iptables-save | sudo tee /etc/iptables.rules 3. Monitor and Audit Open Ports Regularly✅ Run periodic port scans using nmap from another machine: sudo nmap -sS -p- server_ip (Scans all open ports on the server to detect unexpected services.) ✅ Check listening services after updates or software installations: sudo ss -tuln ✅ Review logs to identify unauthorized access attempts: sudo cat /var/log/auth.log | grep "Failed" Best Practices for Port Management🔹 Apply the Principle of Least Privilege (PoLP): Only expose essential ports. 🔹 Use firewalls (UFW, iptables, or Firewalld) to enforce strict port rules. 🔹 Monitor logs and perform regular security scans to detect unauthorized access. 🔹 Use port knocking to further protect SSH access (see Port Knocking section). By limiting open ports and closing unnecessary services, you reduce security risks and protect your Linux server from potential intrusions.
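Auditing the ss output against an allowlist can be scripted so unexpected listeners stand out. In this hedged sketch, audit_ports() and the captured sample output are illustrative; in live use you would pipe real `ss -tuln` output in.

```shell
#!/bin/sh
# Compare `ss -tuln`-style output against an allowlist of expected ports.
audit_ports() {
  # stdin = `ss -tuln` output; arguments = allowed port numbers
  awk 'NR > 1 { n = split($5, a, ":"); print a[n] }' | sort -un |
  while read -r port; do
    case " $* " in
      *" $port "*) ;;                            # expected listener
      *) echo "unexpected open port: $port" ;;   # investigate this one
    esac
  done
}

# captured sample (in real use: sudo ss -tuln | audit_ports 22 80 443)
sample='Netid State  Recv-Q Send-Q Local Address:Port Peer Address:Port
tcp   LISTEN 0      128    0.0.0.0:22        0.0.0.0:*
tcp   LISTEN 0      128    0.0.0.0:80        0.0.0.0:*
tcp   LISTEN 0      128    0.0.0.0:6379      0.0.0.0:*'

printf '%s\n' "$sample" | audit_ports 22 80 443
```

Here the Redis port 6379 is flagged because it is not on the approved list, which is exactly the "unexpected open ports" review the article describes.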
  15. You are reading Part 20 of the 57-part series: Harden and Secure Linux Servers. [Level 2] This series covers progressive security measures, from fundamental hardening techniques to enterprise-grade defense strategies. Each article delves into a specific security practice, explaining its importance and providing step-by-step guidance for implementation. To explore more security best practices, visit the main guide for a full breakdown of all levels and recommendations. Port knocking is a security technique that hides your SSH port from attackers. Instead of leaving SSH (port 22) open, the server keeps it closed by default. Only users who send a specific sequence of connection attempts ("knocks") to predefined ports will unlock SSH access, making it significantly harder for attackers to find and exploit. By implementing port knocking, you reduce the risk of brute-force attacks, automated scanners, and unauthorized SSH access. How to Set Up Port Knocking on Your Server1. Install the knockd ServiceFor Debian/Ubuntu: sudo apt install knockd -y For CentOS/RHEL: sudo yum install knockd -y 2. Configure Port Knocking RulesEdit the knockd configuration file: sudo nano /etc/knockd.conf Add the following rule to require a specific sequence of knocks to open SSH (port 22): [openSSH] sequence = 7000,8000,9000 seq_timeout = 5 command = /sbin/iptables -A INPUT -s %IP% -p tcp --dport 22 -j ACCEPT sequence = 7000,8000,9000 → Users must send connection requests to these ports in order. seq_timeout = 5 → The sequence must be completed within 5 seconds. command = iptables -A INPUT -s %IP% -p tcp --dport 22 -j ACCEPT → Allows SSH access only for the knocking IP. 3. Start and Enable the Knockd ServiceStart the knockd service and enable it to run at boot: sudo systemctl start knockd sudo systemctl enable knockd 4.
Test Port Knocking from a Client MachineFrom another machine, send the correct knocking sequence using knock: knock -v server_ip 7000 8000 9000 (Replace server_ip with your actual server's IP address.) Once the sequence is completed, port 22 will be opened for the client’s IP for a limited time, allowing SSH access. Try connecting to SSH: ssh username@server_ip 5. Automatically Close SSH After a Time Period (Optional)To close SSH access after a session, add a rule to /etc/knockd.conf: [closeSSH] sequence = 9000,8000,7000 seq_timeout = 5 command = /sbin/iptables -D INPUT -s %IP% -p tcp --dport 22 -j ACCEPT Now, after executing: knock -v server_ip 9000 8000 7000 SSH access will be closed again. Best Practices for Port Knocking Security✅ Use a strong knocking sequence (more ports, random order). ✅ Use knockd with UFW or iptables to limit access to trusted users. ✅ Combine port knocking with key-based SSH authentication for maximum security. ✅ Monitor logs (/var/log/syslog or /var/log/knockd.log) for knocking attempts. ✅ Use Single Packet Authorization (SPA) as a more secure alternative (e.g., fwknop). By implementing port knocking, you hide your SSH service from attackers, making it virtually invisible to port scanners and greatly reducing brute-force attack risks.
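When the knock client is not installed, the sequence is still just a series of connection attempts, which bash can send by itself. This sketch uses bash's /dev/tcp feature (so it requires bash, not plain sh); the host and port sequence are illustrative.

```shell
#!/bin/bash
# Send a port-knocking sequence with plain bash: each TCP connection
# attempt is one knock, whether or not it is refused.

send_knocks() {
  host=$1; shift
  for port in "$@"; do
    # a refused connection is fine; the attempt itself is the knock
    (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null || true
    echo "knocked $host:$port"
    sleep 0.2   # stay inside knockd's seq_timeout while keeping the order
  done
}

send_knocks 127.0.0.1 7000 8000 9000
```

After the final knock, knockd on the server would run its configured iptables command to open port 22 for the knocking IP.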
  16. You are reading Part 19 of the 57-part series: Harden and Secure Linux Servers. [Level 2] This series covers progressive security measures, from fundamental hardening techniques to enterprise-grade defense strategies. Each article delves into a specific security practice, explaining its importance and providing step-by-step guidance for implementation. To explore more security best practices, visit the main guide for a full breakdown of all levels and recommendations. AppArmor (Ubuntu/Debian) and SELinux (CentOS/RHEL) are Mandatory Access Control (MAC) systems that enforce strict security policies on processes and services. Unlike traditional Linux permissions, these systems limit what processes can access, reducing the impact if an attacker compromises an application. By confining applications to a predefined set of actions and resources, AppArmor and SELinux prevent unauthorized access, privilege escalation, and file modifications. For AppArmor (Ubuntu/Debian)Check if AppArmor is enabled: sudo apparmor_status If it's not enabled, start the service: sudo systemctl enable --now apparmor List active AppArmor profiles: sudo aa-status Enforce AppArmor Profiles for Specific Services: AppArmor profiles are stored in /etc/apparmor.d/. To create a profile for Nginx: sudo nano /etc/apparmor.d/usr.sbin.nginx Define restricted access rules (example for Nginx): /usr/sbin/nginx { include <abstractions/base> /var/www/html/** r, /etc/nginx/nginx.conf r, /var/log/nginx/** rw, } Save and reload the profile: sudo apparmor_parser -r /etc/apparmor.d/usr.sbin.nginx For SELinux (CentOS/RHEL)Check SELinux status: sestatus If disabled, enable it: sudo setenforce 1 List current SELinux policies: sudo semanage boolean -l Apply SELinux Policies to Restrict Services: Example: Restrict access to a web directory for Apache (httpd) sudo semanage fcontext -a -t httpd_sys_content_t "/web(/.*)?" 
Apply the policy: sudo restorecon -Rv /web Set SELinux to Enforcing Mode (Recommended for Security): sudo setenforce 1 Best Practices for AppArmor & SELinux Security✅ Use AppArmor for lightweight MAC on Ubuntu/Debian (easier to configure). ✅ Use SELinux for fine-grained access control on CentOS/RHEL (stricter policies). ✅ Regularly audit security logs (/var/log/audit/audit.log for SELinux). ✅ Test policies before enforcing (setenforce 0 puts SELinux in permissive mode). ✅ Use audit2allow to generate new SELinux policies for denied actions: sudo cat /var/log/audit/audit.log | audit2allow -M my_policy sudo semodule -i my_policy.pp By enforcing AppArmor or SELinux, you limit application access to system resources, reducing the risk of exploits, privilege escalation, and malware infections, making your Linux server significantly more secure.
  17. You are reading Part 18 of the 57-part series: Harden and Secure Linux Servers. [Level 2] This series covers progressive security measures, from fundamental hardening techniques to enterprise-grade defense strategies. Each article delves into a specific security practice, explaining its importance and providing step-by-step guidance for implementation. To explore more security best practices, visit the main guide for a full breakdown of all levels and recommendations. The sudo command grants elevated privileges, allowing users to execute system-critical commands. If an unauthorized or compromised user gains sudo access, they can modify system files, install malware, or escalate privileges, leading to a full system takeover. By limiting sudo access to only trusted users and necessary commands, you minimize the risk of privilege escalation attacks and enhance system security. 1. Edit the sudoers File SecurelyTo modify sudo permissions, always use visudo, which checks for syntax errors before saving: sudo visudo This opens the /etc/sudoers file for editing. 2. Grant sudo Access to Specific Users for Specific CommandsInstead of giving full sudo access, allow users to execute only necessary commands: Example: Allow username to restart Apache without a password: username ALL=(ALL) NOPASSWD: /bin/systemctl restart apache2 (The user can restart Apache but nothing else.) Example: Allow developer to only edit web server configs: developer ALL=(ALL) NOPASSWD: /bin/nano /etc/nginx/nginx.conf (The user can edit the Nginx config but cannot modify other system files.) 3. 
Restrict sudo Access by User Groups (Recommended Approach)Instead of assigning permissions to individual users, create a specific sudo group: Create a sudo-restricted group (e.g., limitedsudo): sudo groupadd limitedsudo Add users to the group: sudo usermod -aG limitedsudo username Grant sudo access to the group only for specific tasks: %limitedsudo ALL=(ALL) NOPASSWD: /bin/systemctl restart nginx (Only users in limitedsudo can restart Nginx, nothing else.) 4. Audit sudo Privileges RegularlyCheck which users have sudo access: sudo grep 'sudo' /etc/group View sudo logs to track user activity: sudo cat /var/log/auth.log | grep sudo Remove sudo access for unauthorized users: sudo deluser username sudo Best Practices for Securing sudo Access✅ Apply the Principle of Least Privilege (PoLP) → Only grant minimum necessary privileges. ✅ Use timestamped sudo sessions → Set session expiration so users must re-enter passwords (Defaults timestamp_timeout=5). ✅ Enable sudo logging → Monitor logs for suspicious activity (journalctl -xe | grep sudo). ✅ Disable root login (PermitRootLogin no in /etc/ssh/sshd_config). ✅ Require multi-factor authentication (MFA) for sudo (auth required pam_google_authenticator.so in /etc/pam.d/sudo). By restricting sudo access, limiting privileges, and auditing user activity, you prevent privilege escalation attacks and secure critical system operations.
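The `grep 'sudo' /etc/group` audit above can be tightened into a parser that lists exactly who holds sudo. The sudo_members() helper and the sample lines below are illustrative; in live use you would feed it the real /etc/group.

```shell
#!/bin/sh
# List members of the sudo group from /etc/group-format input.
sudo_members() {
  # stdin = /etc/group contents; prints one member per line
  awk -F: '$1 == "sudo" {
    n = split($4, m, ",")
    for (i = 1; i <= n; i++) if (m[i] != "") print m[i]
  }'
}

# sample /etc/group lines (in real use: sudo_members < /etc/group)
printf '%s\n' \
  'root:x:0:' \
  'sudo:x:27:alice,bob' \
  'docker:x:998:carol' \
  | sudo_members
```

Any name in the output that you cannot justify is a candidate for `sudo deluser username sudo`, per the audit step above.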
  18. You are reading Part 17 of the 57-part series: Harden and Secure Linux Servers. [Level 2] This series covers progressive security measures, from fundamental hardening techniques to enterprise-grade defense strategies. Each article delves into a specific security practice, explaining its importance and providing step-by-step guidance for implementation. To explore more security best practices, visit the main guide for a full breakdown of all levels and recommendations. Network segmentation enhances security by isolating critical services and limiting unnecessary communication between different parts of your infrastructure. If an attacker gains access to one system, segmentation prevents them from easily reaching sensitive data or services. By separating workloads and restricting traffic, you reduce attack surfaces, minimize exposure, and improve overall security. 1. Use Virtual Private Clouds (VPCs) and Subnets (For Cloud Environments)In AWS, Google Cloud, or Azure, create separate VPCs for different services (e.g., one for web servers, another for databases). Assign private subnets to sensitive resources so they are not publicly accessible. Use security groups and network ACLs to define which systems can communicate. 2. Configure Firewalls to Control Traffic Between ServicesFirewalls can enforce segmentation by allowing only specific traffic between different zones. For UFW (Ubuntu/Debian Firewall): sudo ufw allow from 192.168.1.0/24 to any port 3306 # Allow MySQL access only from a trusted subnet sudo ufw allow from 192.168.2.0/24 to any port 22 # Allow SSH access only from a trusted subnet sudo ufw enable For Firewalld (CentOS/RHEL Firewall): sudo firewall-cmd --permanent --zone=internal --add-service=mysql sudo firewall-cmd --permanent --zone=external --remove-service=mysql sudo firewall-cmd --reload (This allows MySQL traffic only in the internal zone, blocking it externally.) 3. 
Use iptables to Restrict Traffic (For On-Premise Linux Servers)You can define strict rules using iptables to allow only necessary connections. Example: Restrict SSH access to trusted IPs only sudo iptables -A INPUT -p tcp -s 192.168.1.100 --dport 22 -j ACCEPT sudo iptables -A INPUT -p tcp --dport 22 -j DROP (Only the IP 192.168.1.100 can access SSH; all others are blocked.) Example: Allow database access only from the application server sudo iptables -A INPUT -p tcp -s 192.168.1.200 --dport 3306 -j ACCEPT sudo iptables -A INPUT -p tcp --dport 3306 -j DROP (Only 192.168.1.200 can connect to MySQL, blocking all other IPs.) 4. Implement VLANs for Physical Network SegmentationIf using physical networking, VLANs (Virtual Local Area Networks) help separate different network segments at the switch level. Assign different VLANs for different teams (e.g., finance, development, IT). Use VLAN tagging to control traffic flow. Ensure VLANs cannot communicate directly unless explicitly allowed by firewall rules. Best Practices for Network Segmentation✅ Apply the principle of least privilege (PoLP) → Only allow necessary network connections. ✅ Keep public-facing services isolated from internal services (e.g., separate web servers from databases). ✅ Use logging and monitoring to detect unauthorized traffic attempts. ✅ Regularly audit firewall rules to remove unnecessary access. ✅ Combine segmentation with VPNs for secure internal communication. By implementing network segmentation, you limit the impact of security breaches, protect sensitive resources, and enhance overall security across your infrastructure.
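To keep segmentation rules like the UFW examples above reviewable in one place, they can be generated from a small policy table. The "subnet port comment" format and the emit_rules() helper are an illustrative convention of this sketch, not a ufw feature.

```shell
#!/bin/sh
# Render `ufw allow` commands from a "subnet port comment" policy table.
emit_rules() {
  while read -r subnet port comment; do
    echo "ufw allow from $subnet to any port $port  # $comment"
  done
}

emit_rules <<'EOF'
192.168.1.0/24 3306 MySQL only from the app subnet
192.168.2.0/24 22 SSH only from the admin subnet
EOF
```

Keeping the policy in one table makes the periodic firewall-rule audit recommended above a matter of reviewing a few lines instead of dumping live rules.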
  19. You are reading Part 16 of the 57-part series: Harden and Secure Linux Servers. [Level 2] This series covers progressive security measures, from fundamental hardening techniques to enterprise-grade defense strategies. Each article delves into a specific security practice, explaining its importance and providing step-by-step guidance for implementation. To explore more security best practices, visit the main guide for a full breakdown of all levels and recommendations. 1. Install Google Authenticator on Your Linux Server: sudo apt install libpam-google-authenticator -y # Debian/Ubuntu sudo yum install google-authenticator -y # CentOS/RHEL 2. Configure MFA for Your User Account: Run the following command for the user account you want to secure: google-authenticator You will be prompted to answer setup questions (choose "yes" for time-based authentication). The system will generate a QR code and secret key. Scan the QR code with the Google Authenticator app (available for Android & iOS). Save the backup codes in case you lose access to your device. 3. Enable MFA in SSH (Pluggable Authentication Module - PAM): Edit the PAM SSH configuration file: sudo nano /etc/pam.d/sshd Add the following line at the end of the file: auth required pam_google_authenticator.so Save and close the file. 4. Configure SSH to Require MFA: Edit the SSH configuration file: sudo nano /etc/ssh/sshd_config Find and modify the following line: ChallengeResponseAuthentication yes (On OpenSSH 8.7 and newer this option is named KbdInteractiveAuthentication.) Save and close the file. 5. Restart SSH to Apply Changes: sudo systemctl restart sshd Testing MFAOpen a new terminal and try logging in via SSH: ssh username@your_server_ip After entering your password, you will be prompted for a verification code from the Google Authenticator app. ✅ If the login succeeds after entering the MFA code, MFA is working correctly!
Additional MFA Security Enhancements✅ Enforce MFA for sudo commands (Optional but recommended) sudo nano /etc/pam.d/sudo Add this line: auth required pam_google_authenticator.so (This requires an MFA code before executing sudo commands.) ✅ Allow only specific users to use MFA for SSH Instead of requiring MFA for all users, limit it to specific users by using: Match User yourusername AuthenticationMethods publickey,password publickey,keyboard-interactive (Replace yourusername with the actual username.) ✅ Use hardware-based MFA tokens (YubiKey, Duo Security, etc.) Instead of using Google Authenticator, consider Duo MFA or YubiKey for added security. Best Practices for MFA Security🔹 Ensure you have backup recovery codes in case you lose access to your device. 🔹 Require MFA for all privileged accounts (root, sudo users, admin accounts). 🔹 Monitor failed authentication attempts using: sudo cat /var/log/auth.log | grep "Failed" 🔹 Use MFA alongside key-based SSH authentication for maximum security. By enabling Multi-Factor Authentication (MFA), you add an extra level of protection to your Linux server, making it significantly harder for attackers to gain unauthorized access.
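The backup codes google-authenticator prints at setup can also be regenerated manually when rotating them. This hedged sketch draws random bytes from /dev/urandom; the count and 8-hex-digit format are arbitrary choices for illustration, not the PAM module's exact format.

```shell
#!/bin/sh
# Generate one-time backup codes: 8 random hex characters each.
gen_backup_codes() {
  # $1 = number of codes to produce
  i=0
  while [ "$i" -lt "$1" ]; do
    head -c 4 /dev/urandom | od -An -tx1 | tr -d ' \n'
    echo
    i=$(( i + 1 ))
  done
}

gen_backup_codes 5   # store these somewhere safe, offline
```

As the best practices above note, keep these offline; a backup code stored next to the password it protects defeats the second factor.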
  20. You are reading Part 15 of the 57-part series: Harden and Secure Linux Servers. [Level 2]

Install ClamAV (Antivirus for Linux):

sudo apt install clamav -y   # Debian/Ubuntu
sudo yum install clamav -y   # CentOS/RHEL

Update the ClamAV virus database:

sudo freshclam

(Ensures the latest virus definitions are downloaded.)

Run a full system scan:

sudo clamscan -r /

(Scans the entire system recursively.)

Scan a specific directory:

sudo clamscan -r /home

(Scans only the /home directory.)

Automatically remove infected files:

sudo clamscan -r --remove /home

(Use with caution to avoid deleting critical files.)

Additional Malware Protection Measures

✅ Use RKHunter to Detect Rootkits

Install:            sudo apt install rkhunter -y
Update database:    sudo rkhunter --update
Run a scan:         sudo rkhunter --check --sk

(Detects rootkits and suspicious system modifications.)

✅ Use Malware Scanners for Web Servers (e.g., Linux Servers Running Websites)

Maldet (Linux Malware Detect):

sudo apt install maldet -y
sudo maldet --update
sudo maldet --scan-all /var/www

(Scans and detects malware in website files.)

✅ Monitor and Prevent Cryptojacking Attacks

Check for unusual CPU usage:    top
Use ps to find unauthorized mining processes:    ps aux | grep -i crypto
Block cryptojacking scripts with a browser extension if using a GUI.

Best Practices for Malware Prevention

🔹 Regularly scan your system with ClamAV and RKHunter.
🔹 Keep your OS and applications updated to patch vulnerabilities.
🔹 Avoid running untrusted scripts and use digital signatures for software verification.
🔹 Use firewall rules (ufw or iptables) to block unwanted traffic.
🔹 Monitor logs (/var/log/syslog, /var/log/auth.log) for unusual activity.

By implementing regular malware scans and proactive security measures, you reduce the risk of infections and ensure your Linux server remains clean and secure.

Multi-Factor Authentication (MFA) strengthens server security by requiring a second form of verification in addition to a password. Even if an attacker steals or guesses your password, they cannot log in without the MFA code, significantly reducing the risk of unauthorized access. By enabling MFA, you add an extra layer of security to protect against brute-force attacks, stolen credentials, and phishing attempts.
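Returning to the ClamAV commands in this entry: clamscan's documented exit codes (0 = clean, 1 = virus found, 2 = error) make it easy to build a cron-friendly status line. The wrapper below is a sketch — `scan_status` is an invented name — and is demonstrated without actually running a scan:

```shell
# Sketch: map clamscan's documented exit codes (0 = clean, 1 = virus found,
# anything else = error) to a one-line status suitable for cron email reports.
scan_status() {
  case "$1" in
    0) echo "clean" ;;
    1) echo "INFECTED - review scan output" ;;
    *) echo "scan error (exit code $1)" ;;
  esac
}

# In a real cron job this would follow an actual scan, for example:
#   sudo clamscan -r --quiet /home; scan_status $?
scan_status 0   # prints: clean
```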
  21. You are reading Part 14 of the 57-part series: Harden and Secure Linux Servers. [Level 2]

Install Lynis (a powerful Linux security auditing tool):

sudo apt install lynis -y   # Debian/Ubuntu
sudo yum install lynis -y   # CentOS/RHEL

Run a system security audit:

sudo lynis audit system

Review the security report. Lynis provides a detailed security assessment, including:

✅ System hardening recommendations
✅ Unpatched vulnerabilities
✅ Weak SSH configurations
✅ File permission issues

The final report will include a hardening score and security improvement suggestions.

Additional Security Scanning Tools

✅ Chkrootkit – Scan for Rootkits

Install:     sudo apt install chkrootkit -y
Run a scan:  sudo chkrootkit

(Detects signs of rootkits and backdoors.)

✅ ClamAV – Scan for Malware

Install:                   sudo apt install clamav -y
Update virus definitions:  sudo freshclam
Scan the system:           sudo clamscan -r /home

(Detects malicious files and threats.)

✅ RKHunter – Scan for Rootkits and Malicious Programs

Install:          sudo apt install rkhunter -y
Update database:  sudo rkhunter --update
Run a scan:       sudo rkhunter --check --sk

(Checks for suspicious files, hidden processes, and malware.)

✅ Nmap – Scan for Open Ports and Network Vulnerabilities

Install:                         sudo apt install nmap -y
Scan the server for open ports:  sudo nmap -sS -sV server_ip

(Helps identify unnecessary open ports that may be security risks.)

Best Practices for Security Scanning

🔹 Schedule regular security scans using cron jobs.
🔹 Apply security patches immediately after vulnerabilities are detected.
🔹 Combine multiple tools for a comprehensive security assessment.
🔹 Monitor system logs (/var/log/auth.log) for suspicious activity.

By regularly scanning your Linux server, you can identify security weaknesses, fix vulnerabilities, and proactively protect your system against cyber threats.

While Linux is generally more secure than other operating systems, it is not immune to malware. Servers that interact with the internet, share files, or run untrusted software are at risk of infections, including:

✅ Viruses – Malicious code that can spread across files.
✅ Rootkits – Hidden tools used by attackers to maintain access.
✅ Trojans – Malicious programs disguised as legitimate software.
✅ Cryptojacking scripts – Malware that hijacks your CPU for cryptocurrency mining.

Regular malware scanning and proactive protection help prevent security breaches and data loss.
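On the Lynis side, each `lynis audit system` run also writes a machine-readable report (by default /var/log/lynis-report.dat) that includes a `hardening_index=NN` line. A helper like the following — a sketch with an invented name, demonstrated on a fabricated sample file — lets a cron job track the score over time:

```shell
# Sketch: extract the hardening score Lynis records in its report file.
hardening_score() {
  grep '^hardening_index=' "$1" | cut -d= -f2
}

# Fabricated sample standing in for /var/log/lynis-report.dat:
sample=$(mktemp)
echo 'hardening_index=72' > "$sample"
hardening_score "$sample"   # prints 72
rm -f "$sample"
```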
  22. You are reading Part 13 of the 57-part series: Harden and Secure Linux Servers. [Level 2]

Resource limits prevent system abuse by restricting how many processes a user can run at once. Without limits, a malicious user or process could spawn excessive tasks, leading to denial-of-service (DoS) attacks, crashes, or server slowdowns. By defining user resource limits, you ensure stable performance and system reliability while mitigating potential attacks.

Open the resource limits configuration file:

sudo nano /etc/security/limits.conf

Set user process limits by adding the following lines:

* soft nproc 4096
* hard nproc 8192

soft nproc 4096 → Limits the number of processes a user can start at once to 4096.
hard nproc 8192 → Sets a maximum limit of 8192 processes per user.

Save and close the file.

Additional Resource Limiting Methods

✅ Restrict Maximum Open Files (File Descriptors)

Open /etc/security/limits.conf:

sudo nano /etc/security/limits.conf

Add:

* soft nofile 10240
* hard nofile 20480

(Limits the number of open files per user.)

✅ Apply Limits System-Wide (For All Users) via PAM

sudo nano /etc/pam.d/common-session

Add:

session required pam_limits.so

✅ Modify Kernel-Level Limits (For System-Wide Control)

sudo nano /etc/sysctl.conf

Add:

kernel.threads-max = 100000
fs.file-max = 2097152

Apply changes immediately:

sudo sysctl -p

✅ Verify Limits for a User:

ulimit -a

Best Practices for Resource Limits

🔹 Set reasonable limits to balance performance and security.
🔹 Monitor resource usage using htop or top to detect unusual activity.
🔹 Adjust limits based on workload to prevent bottlenecks or over-restriction.

By enforcing resource limits, you prevent DoS attacks, improve system stability, and protect critical server resources, ensuring smooth and secure operations.

Security misconfigurations and vulnerabilities can leave your system exposed to attacks. Security scanners help identify weaknesses before attackers exploit them, ensuring your server is properly hardened. By regularly scanning your system, you can detect misconfigurations, outdated software, weak security settings, and other risks, allowing you to take corrective action before it's too late.
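The resource limits above can be sanity-checked without editing any files: lowering a soft limit inside a subshell affects only that subshell and needs no root, so it is a safe way to watch `ulimit` behave (this assumes the current hard open-files limit is at least 512, which is typical):

```shell
# Lower the soft open-files limit inside a subshell, then read it back.
# The parent shell's limits are untouched, and no root access is needed.
( ulimit -S -n 512; ulimit -S -n )   # prints 512
```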
  23. You are reading Part 12 of the 57-part series: Harden and Secure Linux Servers. [Level 2]

1. Use rsync for File Backups

rsync is a powerful tool that syncs files and directories between locations.

Backup to a local directory:

rsync -av --delete /important_data /backup_location

-a → Preserves file attributes (permissions, timestamps).
-v → Enables verbose output.
--delete → Removes deleted files from the backup to match the source.

Backup to a remote server:

rsync -avz /important_data username@remote_server:/backup_location

(Replace username@remote_server with your backup server details.)

Automate backups with a cron job:

crontab -e

Add the following line to schedule daily backups at 2 AM:

0 2 * * * rsync -av --delete /important_data /backup_location

2. Use tar for Compressed Backups

If you prefer compressed backups, use tar.

Create a compressed archive:

tar -czvf /backup_location/backup_$(date +%F).tar.gz /important_data

(This creates a timestamped backup file.)

Automate it with a cron job:

0 3 * * * tar -czvf /backup_location/backup_$(date +%F).tar.gz /important_data

(Runs daily at 3 AM.)

3. Use Timeshift for System Snapshots (Desktop & Server Users)

If you're using Ubuntu/Debian, Timeshift allows automated system snapshots for quick recovery.

Install Timeshift:

sudo apt install timeshift -y

Create a snapshot:

sudo timeshift --create --comments "Daily Backup" --tags D

Set up automated snapshots:

sudo timeshift --schedule

4. Use Cloud-Based Backup Solutions (Optional)

For extra security, offsite backups are recommended:

✅ Google Drive / Dropbox / OneDrive → Use rclone to sync:    rclone sync /important_data remote:backup_location
✅ AWS S3 / Google Cloud Storage / Backblaze B2 → Automate uploads using CLI tools.
✅ Dedicated Backup Servers → Use tools like BorgBackup or Duplicity for encrypted backups.

Best Practices for Backup Security

✅ Use at least the 3-2-1 backup rule:

3 copies of your data
2 different storage types (local + remote)
1 offsite backup (cloud or external drive)

✅ Encrypt sensitive backups using gpg or openssl:

tar -czvf - /important_data | openssl enc -aes-256-cbc -e -out backup.tar.gz.enc

(Encrypts the backup with AES-256 encryption.)

✅ Test backups regularly to verify they work:

tar -tzvf /backup_location/backup_2024-01-18.tar.gz

(Lists the contents of a backup to confirm integrity.)

By scheduling automated backups, encrypting sensitive data, and storing copies offsite, you protect your system from data loss and ensure business continuity in case of an emergency.
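The tar, verification, and openssl steps above can be exercised end-to-end against a throwaway directory before pointing them at real data. This sketch adds -pbkdf2 (stronger key derivation than OpenSSL's legacy default) and takes the passphrase from an environment variable; BACKUP_PASS is an invented name — in practice feed it from a secrets store rather than hard-coding it:

```shell
# Create a throwaway "important_data" directory to back up.
src=$(mktemp -d); work=$(mktemp -d)
echo "payroll records" > "$src/file.txt"

# 1. Timestamped compressed backup (-C keeps paths relative inside the archive).
backup="$work/backup_$(date +%F).tar.gz"
tar -czf "$backup" -C "$src" .

# 2. Verify the archive before trusting it.
tar -tzf "$backup" > /dev/null && echo "archive OK"

# 3. Encrypt with AES-256; -pbkdf2 replaces the weak legacy key derivation.
export BACKUP_PASS="example-only-passphrase"   # invented for the demo
openssl enc -aes-256-cbc -pbkdf2 -pass env:BACKUP_PASS -in "$backup" -out "$backup.enc"

# 4. Prove the encrypted copy decrypts back to identical bytes.
openssl enc -d -aes-256-cbc -pbkdf2 -pass env:BACKUP_PASS -in "$backup.enc" -out "$work/roundtrip.tar.gz"
cmp -s "$backup" "$work/roundtrip.tar.gz" && echo "round-trip OK"

rm -rf "$src" "$work"
```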
  24. You are reading Part 11 of the 57-part series: Harden and Secure Linux Servers. [Level 2]

The Linux kernel is responsible for managing system operations, including network communication and security policies. Attackers often exploit weak kernel settings to perform DDoS attacks, IP spoofing, and other network-based intrusions. Hardening kernel parameters helps mitigate these risks by enforcing strict security controls on network behavior.

Open the kernel configuration file:

sudo nano /etc/sysctl.conf

Add the following security settings to improve network protection:

net.ipv4.tcp_syncookies = 1                     # Enables SYN cookies to prevent SYN flood attacks
net.ipv4.conf.all.rp_filter = 1                 # Enables reverse path filtering to prevent IP spoofing
net.ipv4.conf.default.accept_source_route = 0   # Disables source routing (prevents malicious rerouting)

Save and close the file.

Apply the changes immediately:

sudo sysctl -p

Additional Kernel Hardening Settings for Security

✅ Disable ICMP (Ping) Requests (Prevents basic DDoS attacks like Smurf attacks)

echo "net.ipv4.icmp_echo_ignore_all = 1" | sudo tee -a /etc/sysctl.conf

✅ Prevent IP Forwarding (Stops your server from being used as a router)

echo "net.ipv4.ip_forward = 0" | sudo tee -a /etc/sysctl.conf

✅ Restrict Core Dumps (Prevents sensitive memory leaks)

echo "fs.suid_dumpable = 0" | sudo tee -a /etc/sysctl.conf

✅ Enable Address Space Layout Randomization (ASLR) (Protects against memory-based attacks)

echo "kernel.randomize_va_space = 2" | sudo tee -a /etc/sysctl.conf

Best Practices for Kernel Security

🔹 Regularly update the kernel to apply security patches (sudo apt update && sudo apt upgrade -y).
🔹 Use a kernel security framework like AppArmor or SELinux for additional protection.
🔹 Monitor kernel logs (sudo journalctl -k) to check for security warnings.

By hardening kernel parameters, you enhance system security, protect against network-based attacks, and reinforce server defenses against unauthorized access.

Backups are crucial for disaster recovery. Whether it's a cyberattack, accidental file deletion, hardware failure, or corruption, having regularly scheduled backups ensures that your critical data and system configurations can be restored quickly with minimal downtime.
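The sysctl values from the kernel-hardening entry above can be verified read-only — no root needed — by comparing `sysctl -n` output against the desired settings. This is a sketch; `check_sysctl` is an invented helper name:

```shell
# Sketch: report whether a kernel parameter currently holds the hardened value.
check_sysctl() {
  local key="$1" want="$2" have
  have=$(sysctl -n "$key" 2>/dev/null) || { echo "$key: unavailable"; return 0; }
  if [ "$have" = "$want" ]; then
    echo "$key: OK"
  else
    echo "$key: $have (want $want)"
  fi
}

check_sysctl net.ipv4.tcp_syncookies 1
check_sysctl net.ipv4.conf.all.rp_filter 1
check_sysctl kernel.randomize_va_space 2
```

Running this after `sudo sysctl -p` confirms the settings actually took effect, which catches typos in /etc/sysctl.conf that `tee -a` would silently accept.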
  25. You are reading Part 10 of the 57-part series: Harden and Secure Linux Servers. [Level 1]

The SSH service (Secure Shell) is a critical entry point for remote server management, but leaving default settings unchanged makes it vulnerable to brute-force attacks, unauthorized access, and exploits. Configuring SSH properly helps harden your server by limiting login options and restricting access to trusted users.

How to Secure SSH Configuration

Open the SSH configuration file:

sudo nano /etc/ssh/sshd_config

Modify the following settings to improve security:

Port 2222                    # Change the default SSH port
PasswordAuthentication no    # Disable password login (use SSH keys instead)
Protocol 2                   # Use SSH protocol 2 only (more secure)

Change the default SSH port (e.g., from 22 to 2222) to make brute-force attacks less likely. Disable password authentication to enforce SSH key-based login. Ensure SSH Protocol 2 is used, as Protocol 1 has known vulnerabilities. Save and close the file.

Restart SSH to apply the changes:

sudo systemctl restart sshd

Additional SSH Security Enhancements

✅ Limit SSH access to specific users. Add the following line to /etc/ssh/sshd_config:

AllowUsers yourusername

Replace yourusername with your actual SSH username to restrict access to only approved users.

✅ Enable SSH Rate Limiting with Fail2Ban. If Fail2Ban is installed, configure SSH protection in /etc/fail2ban/jail.conf:

[sshd]
enabled = true
maxretry = 5
bantime = 3600

(Blocks IPs after 5 failed login attempts for one hour.)

✅ Disable root login. Ensure this line is set in /etc/ssh/sshd_config:

PermitRootLogin no

✅ Restrict SSH access to trusted IP addresses (optional but highly recommended). Edit firewall rules to allow SSH only from specific IPs:

sudo ufw allow from your_trusted_ip to any port 2222

Replace your_trusted_ip with your actual IP address.

Best Practices for SSH Security

🔹 Use key-based authentication instead of passwords for SSH access.
🔹 Change the SSH port to a non-standard number (above 1024 and below 65535).
🔹 Monitor SSH login attempts with sudo cat /var/log/auth.log | grep "sshd".
🔹 Use multi-factor authentication (MFA) for added security (e.g., Google Authenticator).

By securing SSH configurations, you reduce attack risks, prevent brute-force login attempts, and enhance overall server security.
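The hardening directives in this entry can be double-checked before restarting the daemon. The audit below is only a sketch (`audit_sshd` is an invented name) run against a temporary sample file so it works anywhere; on a real server, point it at /etc/ssh/sshd_config and also run `sudo sshd -t` to catch syntax errors before `systemctl restart sshd`:

```shell
# Sketch: confirm key hardening directives appear, exactly, in a config file.
audit_sshd() {
  local cfg="$1" directive
  for directive in "PermitRootLogin no" "PasswordAuthentication no"; do
    if grep -qx -- "$directive" "$cfg"; then
      echo "ok: $directive"
    else
      echo "MISSING: $directive"
    fi
  done
}

# Temporary sample standing in for /etc/ssh/sshd_config:
cfg=$(mktemp)
printf '%s\n' 'Port 2222' 'PermitRootLogin no' 'PasswordAuthentication no' > "$cfg"
audit_sshd "$cfg"
rm -f "$cfg"
```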
