
You are reading Part 56 of the 57-part series: Harden and Secure Linux Servers. [Level 6]

This series covers progressive security measures, from fundamental hardening techniques to enterprise-grade defense strategies. Each article delves into a specific security practice, explaining its importance and providing step-by-step guidance for implementation.

To explore more security best practices, visit the main guide for a full breakdown of all levels and recommendations.

A Data Retention Policy determines how long data is stored before being deleted. It helps:

Reduce storage costs - Prevents unnecessary accumulation of old data.
Minimize security risks - Lowers the risk of data leaks and breaches by removing unneeded data.
Ensure regulatory compliance - Meets GDPR, HIPAA, PCI-DSS, ISO 27001 and other legal requirements.
Improve data management - Helps organize data lifecycle and prevent clutter.

🔹 By enforcing a structured data retention policy, you ensure better security, compliance, and efficiency in data management.

How to Implement a Data Retention Policy on a Linux Server

1. Define Retention Periods Based on Compliance and Business Needs

Different types of data require different retention periods.

📌 Example Data Retention Policy:

| Data Type | Retention Period | Reason |
| --- | --- | --- |
| System Logs | 90 Days | Security & troubleshooting |
| Web Server Logs | 180 Days | Performance monitoring |
| Financial Records | 7 Years | Legal & tax compliance (IRS, SOX) |
| Customer Data | 5 Years | GDPR, HIPAA regulations |
| Employee Records | 6 Years | HR compliance |
| Backups | 30-90 Days | Disaster recovery |

🔹 Customize retention policies based on industry regulations and internal policies.

2. Set Up Automated Data Deletion Using Cron Jobs

For on-premises servers, use cron jobs to automate the deletion of old files.

A. Delete Log Files Older Than 90 Days

Add a cron job to automatically delete old log files:

sudo crontab -e

Add the following line:

0 3 * * * find /var/log -type f -mtime +90 -name "*.log" -delete

🔹 This runs every night at 3 AM and removes log files older than 90 days.
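
Before enabling the job, it is worth previewing exactly which files would be removed. The same find command with -print instead of -delete gives a safe dry run:

find /var/log -type f -mtime +90 -name "*.log" -print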

B. Automatically Purge Database Records Older Than X Days

For MySQL/PostgreSQL databases, use scheduled jobs to delete old records.

MySQL Example:
DELETE FROM user_logs WHERE log_date < NOW() - INTERVAL 90 DAY;

To schedule it:

CREATE EVENT delete_old_logs
ON SCHEDULE EVERY 1 DAY
DO DELETE FROM user_logs WHERE log_date < NOW() - INTERVAL 90 DAY;

🔹 This removes logs older than 90 days automatically.
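Note: MySQL events only fire while the event scheduler is running. It is on by default in MySQL 8.0+, but if it has been disabled you can turn it back on (requires sufficient privileges):

SET GLOBAL event_scheduler = ON;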

PostgreSQL Example:
DELETE FROM audit_logs WHERE created_at < NOW() - INTERVAL '90 days';
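
PostgreSQL has no built-in event scheduler, so run the statement externally, either through an extension such as pg_cron or a plain cron job (the database name mydb below is a placeholder):

0 3 * * * psql -d mydb -c "DELETE FROM audit_logs WHERE created_at < NOW() - INTERVAL '90 days';"
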
3. Implement Cloud Storage Lifecycle Policies

For cloud-based data retention, use automated lifecycle policies.

A. AWS S3: Set Up Lifecycle Policies for Automatic Data Expiry
  1. Open AWS S3 Console → Select Bucket → Go to Management tab.

  2. Click Create Lifecycle Rule → Name the rule (e.g., "Delete Old Backups").

  3. Set Expiration → Delete objects after 30, 60, or 90 days.

  4. Save the rule → AWS automatically deletes expired files.

To set lifecycle rules via CLI:

aws s3api put-bucket-lifecycle-configuration --bucket my-bucket --lifecycle-configuration '{
    "Rules": [
        {
            "ID": "DeleteOldData",
            "Filter": { "Prefix": "logs/" },
            "Status": "Enabled",
            "Expiration": { "Days": 90 }
        }
    ]
}'

🔹 This auto-deletes files in the "logs/" folder after 90 days.
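To verify that the rule is active, read the configuration back:

aws s3api get-bucket-lifecycle-configuration --bucket my-bucket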

4. Ensure Secure Data Deletion (Prevent Data Recovery)

Simply deleting a file does not remove it permanently—it can be recovered.

A. Use Secure Deletion Tools for Files
  1. Shred files before deletion to prevent recovery:

    shred -u -z /var/log/old_log.log
  2. Wipe entire directories securely (plain rm only unlinks files and leaves their contents recoverable, so overwrite them with wipe instead):

    wipe -rf /important_dir/
  3. For SSDs, overwriting in place is unreliable because of wear leveling; use blkdiscard to discard every block on the device (this erases the whole device, so double-check the target):

    sudo blkdiscard /dev/sdx
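
To combine retention with secure erasure, the cron job from step 2 can be adapted to shred matching files instead of simply deleting them; a minimal sketch using the same path and age threshold as above:

0 3 * * * find /var/log -type f -mtime +90 -name "*.log" -exec shred -u -z {} \;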
B. Securely Delete Database Records

To overwrite data before deletion in MySQL:

UPDATE users SET email='deleted@example.com', password=NULL WHERE last_active < NOW() - INTERVAL 5 YEAR;
DELETE FROM users WHERE last_active < NOW() - INTERVAL 5 YEAR;

🔹 This overwrites sensitive fields before the rows are removed. Keep in mind that copies may still exist in binary logs, replicas, and backups, so apply the retention policy to those as well.

5. Maintain Audit Logs for Data Retention Enforcement

Track who accessed, modified, or deleted data for compliance.

A. Log Data Deletion Activities

Enable logging for file deletions:

sudo auditctl -w /var/log -p wa -k log_delete

Check audit logs:

sudo ausearch -k log_delete --start today
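
Rules added with auditctl do not survive a reboot. To make the rule persistent, place it in the audit rules directory (the filename below is just an example) and reload:

echo "-w /var/log -p wa -k log_delete" | sudo tee /etc/audit/rules.d/log_delete.rules
sudo augenrules --load
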
B. Monitor Deleted Database Records

Log database deletions:

CREATE TRIGGER log_deletions
BEFORE DELETE ON users
FOR EACH ROW
INSERT INTO deleted_records (user_id, deleted_at) VALUES (OLD.id, NOW());

🔹 Now, deleted database records are logged before removal.
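
The trigger assumes a deleted_records audit table already exists; a minimal sketch of one (the columns mirror what the trigger inserts):

CREATE TABLE deleted_records (
    user_id INT NOT NULL,
    deleted_at DATETIME NOT NULL
);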

Best Practices for Data Retention and Deletion

Define clear retention periods for different types of data.
Use automated deletion via cron jobs or lifecycle policies.
Securely erase sensitive data to prevent recovery.
Regularly audit and update retention policies to stay compliant.
Monitor and log deletions to track policy enforcement.

By implementing a structured Data Retention Policy, you reduce security risks, ensure compliance, and optimize storage usage, helping maintain a secure and efficient data management system.
