

CodeName Blogs

Featured Entries

Our community blogs

  1. Linux News

    A blog by Blogger in CodeName Blogs
    • 31 Entries
    • 0 Comments
    • 1197 Views
    By: Janus Atienza
    Mon, 21 Apr 2025 16:36:45 +0000


    Microsoft SQL Server supports Linux operating systems, including Red Hat Enterprise Linux and Ubuntu, as well as container images on platforms like Kubernetes, Docker Engine, and OpenShift. Regardless of the platform on which you are running SQL Server, the databases are prone to corruption and inconsistencies. If your MDF/NDF files on a Linux system get corrupted for any reason, you can repair them. In this post, we’ll discuss the procedure to repair and restore a corrupt SQL database on a Linux system.

    Causes of corruption in MDF/NDF files in Linux:

    The SQL database files stored on a Linux system can get corrupted due to one of the following reasons:

    • Sudden system shutdown
    • Bugs in SQL Server
    • Bad sectors on the system’s hard drive where the database files are saved
    • The operating system crashing while you are working on the database
    • Hardware failure or malware infection
    • The system running out of disk space

    Ways to repair and restore corrupt SQL databases in Linux

    To repair a corrupt SQL database file stored on a Linux system, you can use SQL Server Management Studio (SSMS) on Ubuntu or Red Hat Enterprise Linux itself, or use a professional SQL repair tool.

    Steps to repair a corrupt SQL database on a Linux system:

    • First, start SQL Server on your Linux system:
    • Open the terminal with Ctrl+Alt+T (or Alt+F2).
    • Run the command below and press the Enter key.

    sudo systemctl start mssql-server

    • In SSMS, follow the steps below to restore and repair the database file on the Linux system:

    Step 1 - If you have an up-to-date backup file, you can use it to restore the corrupt database. Here’s the command (adjust the backup path to match your system):

    RESTORE DATABASE [AdventureWorks2019] FROM DISK = N'/var/opt/mssql/backup/DBTesting.bak' WITH REPLACE, STATS = 10

    GO

    Step 2 - If you have no backup, then, with admin rights, run the DBCC CHECKDB command in SQL Server Management Studio (SSMS). Here, the corrupted database is named “DBTesting”. Before using the command, first set the database to SINGLE_USER mode. Here are the commands:

    ALTER DATABASE DBTesting SET SINGLE_USER

    DBCC CHECKDB ('DBTesting', REPAIR_REBUILD)

    GO

    • If REPAIR_REBUILD fails to repair the problematic MDF file, you can try the REPAIR_ALLOW_DATA_LOSS option of DBCC CHECKDB:

    DBCC CHECKDB (N'DBTesting', REPAIR_ALLOW_DATA_LOSS) WITH ALL_ERRORMSGS, NO_INFOMSGS;

    GO

    • Next, change the database mode from SINGLE_USER back to MULTI_USER by executing the command below:

    ALTER DATABASE DBTesting SET MULTI_USER

    The REPAIR_ALLOW_DATA_LOSS option can help you repair a corrupt MDF file, but it may remove data pages containing inconsistent data during the repair. As a result, you can lose data.

    Step 3-Use a Professional SQL Repair tool:

    If you don’t want to risk the data in your database, install a professional MS SQL recovery tool, such as Stellar Repair for MS SQL. The tool is equipped with enhanced algorithms that can help you repair corrupt or inconsistent MDF/NDF files, even on a Linux system. Here are the steps to install and launch Stellar Repair for MS SQL:

    • First open Terminal on Linux/Ubuntu
    • Next, run the below command:

    $ sudo apt install app_name  

    Here, replace app_name with the absolute path of the Stellar Repair for MS SQL package.

    • Next, launch the application on your Ubuntu system:
    • Open the Activities overview on your desktop.
    • Locate the Stellar Repair for MS SQL application and press the Enter key.
    • Enter the system password to authenticate.
    • Next, select the database in Stellar Repair for MS SQL’s user interface by clicking on Select Database.

    To Conclude

    If SQL Server is installed on a Linux virtual machine and the system suddenly crashes, the MDF file can get corrupted. In this or any other scenario where a SQL database file becomes inaccessible on a Linux system, you can repair it using the two methods described above. To repair corrupt MDF files quickly, without data loss or file-size restrictions, you can use a professional MS SQL repair tool. Such tools support repairing MDF files on both Windows and Linux systems.

    The post Linux SQL Server Database Recovery: Restoring Corrupt Databases appeared first on Unixmen.


  2. Jessica Brown

    • 47 Entries
    • 0 Comments
    • 4133 Views

    SaltStack (SALT): A Comprehensive Overview

    SaltStack, commonly referred to as SALT, is a powerful open-source infrastructure management platform designed for scalability. Leveraging event-driven workflows, SALT provides an adaptable solution for automating configuration management, remote execution, and orchestration across diverse infrastructures.

    This document offers an in-depth guide to SALT for both technical teams and business stakeholders, demystifying its features and applications.

    What is SALT?

    SALT is a versatile tool that serves multiple purposes in infrastructure management:

    • Configuration Management Tool (like Ansible, Puppet, Chef): Automates the setup and maintenance of servers and applications.

    • Remote Execution Engine (similar to Fabric or SSH): Executes commands on systems remotely, whether targeting a single node or thousands.

    • State Enforcement System: Ensures systems maintain desired configurations over time.

    • Event-Driven Automation Platform: Detects system changes and triggers actions in real-time.

    Key Technologies:

    • YAML: Used for defining states and configurations in a human-readable format.

    • Jinja: Enables dynamic templating for YAML files.

    • Python: Provides extensibility through custom modules and scripts.

    Supported Architectures

    SALT accommodates various architectures to suit organizational needs:

    • Master/Minion: A centralized control model where a Salt Master sends commands to Salt Minions, which execute them locally.

    • Masterless: A decentralized approach using salt-call --local (or salt-ssh) to execute tasks without requiring a master node.

    Core Components of SALT

    | Component | Description |
    | --- | --- |
    | Salt Master | Central control node that manages minions, sends commands, and orchestrates infrastructure tasks. |
    | Salt Minion | Agent installed on managed nodes that executes commands from the master. |
    | Salt States | Declarative YAML configuration files that define desired system states (e.g., package installations). |
    | Grains | Static metadata about a system (e.g., OS version, IP address), useful for targeting specific nodes. |
    | Pillars | Secure, per-minion data storage for secrets and configuration details. |
    | Runners | Python modules executed on the master to perform complex orchestration tasks. |
    | Reactors | Event listeners that trigger actions in response to system events. |
    | Beacons | Minion-side watchers that emit events based on system changes (e.g., file changes or CPU spikes). |

    Key Features of SALT

    | Feature | Description |
    | --- | --- |
    | Agent or Agentless | SALT can operate in agent (minion-based) or agentless (masterless) mode. |
    | Scalability | Capable of managing tens of thousands of nodes efficiently. |
    | Event-Driven | Reacts to real-time system changes via beacons and reactors, enabling automation at scale. |
    | Python Extensibility | Developers can extend modules or create custom ones using Python. |
    | Secure | Employs ZeroMQ for communication and AES encryption for data security. |
    | Role-Based Config | Dynamically applies configurations based on server roles using grains metadata. |
    | Granular Targeting | Targets systems using name, grains, regex, or compound filters for precise management. |

    Common Use Cases

    SALT is widely used across industries for tasks like:

    • Provisioning new systems and applying base configurations.

    • Enforcing security policies and managing firewall rules.

    • Installing and enabling software packages (e.g., HTTPD, Nginx).

    • Scheduling and automating patching across multiple environments.

    • Monitoring logs and system states with automatic remediation for issues.

    • Managing VM and container lifecycles (e.g., Docker, LXC).

    Real-World Examples

    1. Remote Command Execution:

      • salt '*' test.ping (Pings all connected systems).

      • salt 'web*' cmd.run 'systemctl restart nginx' (Restarts Nginx service on all web servers).

    2. State File Example (YAML):

      nginx:
        pkg.installed: []
        service.running:
          - enable: True
          - require:
            - pkg: nginx
      
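The remote-execution commands above can also be driven from a script. A minimal Python sketch that builds the salt CLI argument list and (optionally) shells out to it; the `build_salt_cmd` helper is hypothetical, and actually running the commands assumes a working Salt master:

```python
import subprocess

def build_salt_cmd(target, func, *args, compound=False):
    """Build an argv list for the salt CLI (hypothetical helper)."""
    cmd = ["salt"]
    if compound:
        cmd.append("-C")  # compound matcher, e.g. "environment:dev and region:us-east"
    cmd.extend([target, func, *args])
    return cmd

def run_salt(argv):
    """Execute the salt CLI (requires a Salt master and minions)."""
    return subprocess.run(argv, check=True)

# salt '*' test.ping
print(build_salt_cmd("*", "test.ping"))

# salt 'web*' cmd.run 'systemctl restart nginx'
print(build_salt_cmd("web*", "cmd.run", "systemctl restart nginx"))
```

Building the argv as a list (rather than a shell string) keeps arguments like `systemctl restart nginx` safely quoted when the command is eventually executed.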

    Comparing SALT to Other Tools

    | Feature | Salt | Ansible | Puppet | Chef |
    | --- | --- | --- | --- | --- |
    | Language | YAML + Python | YAML + Jinja | Puppet DSL | Ruby DSL |
    | Agent Required | Optional | No | Yes | Yes |
    | Push/Pull | Both | Push | Pull | Pull |
    | Speed | Very Fast | Medium | Medium | Medium |
    | Scalability | High | Medium-High | Medium | Medium |
    | Event-Driven | Yes | No | No | Limited |

    Security Considerations

    SALT ensures secure communication and authentication:

    • Authentication: Uses public/private key pairs to authenticate minions.

    • Encryption: Communicates via ZeroMQ encrypted with AES.

    • Access Control: Defines granular controls using Access Control Lists (ACLs) in the Salt Master configuration.

    Additional Information

    For organizations seeking enhanced usability, SaltStack Config offers a graphical interface to streamline workflow management. Additionally, SALT's integration with VMware Tanzu provides advanced automation for enterprise systems.

    Installation Example

    On a master node (e.g., RedHat):

    sudo yum install salt-master
    

    On minion nodes:

    sudo yum install salt-minion
    

    Configure /etc/salt/minion with:

    master: your-master-hostname
    

    Then start the minion:

    sudo systemctl enable --now salt-minion
    

    Accept the minion on the master:

    sudo salt-key -L         # list all keys
    sudo salt-key -A         # accept all pending minion keys
    

    Where to Go Next

    • Salt Project Docs

    • Git-based states with gitfs

    • Masterless setups for container deployments

    • Custom modules in Python

    • Event-driven orchestration with beacons + reactors

    Example: Patching 600+ Servers in 3 Regions with 3 Different Environments

    Let's take an example with three environments: DEV (Development), PREP (Preproduction), and PROD (Production). Now let's dig a little deeper and say we also have three regions: EUS (East US), WUS (West US), and EUR (Europe), and we would like patches to be applied on different dates: DEV is patched 3 days after the second Tuesday, PREP 5 days after the second Tuesday, and PROD 5 days after the third Tuesday. The final requirement for this mass configuration: patches should be applied in the client's local time.

    In many tools, such as AUM or JetPatch, you would need several different maintenance schedules or plans to create this setup. With SALT, the configuration lives inside the minion, so it is much better defined and simpler to manage.

    Use Case Recap

    You want to patch three environment groups based on local time and specific schedules:

    | Environment | Schedule Rule | Timezone |
    | --- | --- | --- |
    | DEV | 3 days after the 2nd Tuesday of the month | Local |
    | PREP | 5 days after the 2nd Tuesday of the month | Local |
    | PROD | 5 days after the 3rd Tuesday of the month | Local |

    Each server knows its environment via Salt grains, and the local timezone via OS or timedatectl.
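The schedule rules above boil down to "Nth Tuesday of the month, plus an offset in days". Before wiring this into Salt, the date arithmetic can be sanity-checked with a stdlib-only sketch (function names here are illustrative, not part of Salt):

```python
import calendar
import datetime

def nth_tuesday(year, month, n):
    """Date of the nth Tuesday of the given month."""
    cal = calendar.Calendar()
    tuesdays = [d for d in cal.itermonthdates(year, month)
                if d.month == month and d.weekday() == calendar.TUESDAY]
    return tuesdays[n - 1]

def patch_date(year, month, week, offset_days):
    """Nth Tuesday plus an offset, e.g. DEV = 2nd Tuesday + 3 days."""
    return nth_tuesday(year, month, week) + datetime.timedelta(days=offset_days)

# April 2025: Tuesdays fall on the 1st, 8th, 15th, 22nd, and 29th
print(patch_date(2025, 4, 2, 3))  # DEV  -> 2025-04-11
print(patch_date(2025, 4, 2, 5))  # PREP -> 2025-04-13
print(patch_date(2025, 4, 3, 5))  # PROD -> 2025-04-20
```

The orchestration script later in this post computes the same dates with dateutil's `relativedelta(weekday=TU(n))`; this version avoids the third-party dependency for quick verification.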

    Step-by-Step Plan

    1. Set Custom Grains for Environment & Region

    2. Create a Python script (run daily) that:

      • Checks if today matches the schedule per group

      • If yes, uses Salt to target minions with the correct grain and run patching

    3. Schedule this script via cron or Salt scheduler

    4. Use Salt States to define patching

    Step 1: Define Custom Grains

    On each minion, configure /etc/salt/minion.d/env_grains.conf:

    grains:
      environment: dev   # or prep, prod
      region: us-east    # or us-west, eu-central, etc.
    

    Then restart the minion:

    sudo systemctl restart salt-minion
    

    Verify:

    salt '*' grains.items
    

    Step 2: Salt State for Patching

    Create patching/init.sls:

    update-packages:
      pkg.uptodate:
        - refresh: True
        - retry:
            attempts: 3
            interval: 15
    
    reboot-if-needed:
      module.run:
        - name: system.reboot
        - onlyif: 'test -f /var/run/reboot-required'
    

    Step 3: Python Script to Orchestrate Patching

    Let’s build run_patching.py. It:

    • Figures out the correct date for patching

    • Uses salt CLI to run patching for each group

    • Handles each group in its region and timezone

    #!/usr/bin/env python3
    import subprocess
    import datetime
    import pytz
    from dateutil.relativedelta import relativedelta, TU
    
    # Define your environments and their rules
    envs = {
        "dev": {"offset": 3, "week": 2},
        "prep": {"offset": 5, "week": 2},
        "prod": {"offset": 5, "week": 3}
    }
    
    # Map environments to regions (optional)
    regions = {
        "dev": ["us-east", "us-west"],
        "prep": ["us-east", "eu-central"],
        "prod": ["us-east", "us-west", "eu-central"]
    }
    
    # Timezones per region
    region_tz = {
        "us-east": "America/New_York",
        "us-west": "America/Los_Angeles",
        "eu-central": "Europe/Berlin"
    }
    
    DESIRED_HOUR = 6  # earliest local hour at which patching may start

    def calculate_patch_date(year, month, week, offset):
        # Nth Tuesday of the month, plus the offset in days
        nth_tuesday = datetime.date(year, month, 1) + relativedelta(weekday=TU(week))
        return nth_tuesday + datetime.timedelta(days=offset)

    def is_today_patch_day(env, region):
        now = datetime.datetime.now(pytz.timezone(region_tz[region]))
        target_day = calculate_patch_date(now.year, now.month, envs[env]["week"], envs[env]["offset"])
        return now.date() == target_day and now.hour >= DESIRED_HOUR
    
    def run_salt_target(environment, region):
        target = f"environment:{environment} and region:{region}"
        print(f"Patching {target}...")
        subprocess.run([
            "salt", "-C", target, "state.apply", "patching"
        ])
    
    def main():
        for env in envs:
            for region in regions[env]:
                if is_today_patch_day(env, region):
                    run_salt_target(env, region)
    
    if __name__ == "__main__":
        main()
    

    Make it executable:

    chmod +x /srv/scripts/run_patching.py
    

    Test it:

    ./run_patching.py
    

    Step 4: Schedule via Cron (on Master)

    Edit crontab:

    crontab -e
    

    Add daily job:

    # Run daily at 6 AM UTC
    0 6 * * * /srv/scripts/run_patching.py >> /var/log/salt/patching.log 2>&1
    

    This assumes the local time logic is handled in the script using each region’s timezone.

    Security & Safety Tips

    • Test patching states on a few dev nodes first (salt -G 'environment:dev' -l debug state.apply patching)

    • Add Slack/email notifications (Salt Reactor or Python smtplib)

    • Consider dry-run support with test=True (in pkg.uptodate)

    • Use salt-run jobs.list_jobs to track job execution

    Optional Enhancements

    • Use Salt Beacons + Reactors to monitor and patch in real-time

    • Integrate with JetPatch or Ansible for hybrid control

    • Add patch deferral logic for critical services

    • Write to a central patching log DB with job status per host

    Overall Architecture

    Minions:

    • Monitor the date/time via beacons

    • On patch day (based on local logic), send a custom event to the master

    Master:

    • Reacts to that event via a reactor

    • Targets the sending minion and applies the patching state

    Step-by-Step: Salt Beacon + Reactor Model

    1. Define a Beacon on Each Minion

    File: /etc/salt/minion.d/patchday_beacon.conf

    beacons:
      patchday:
        interval: 3600  # check every hour
    

    This refers to a custom beacon we will define.

    2. Create the Custom Beacon (on all minions)

    File: /srv/salt/_beacons/patchday.py

    import datetime
    from dateutil.relativedelta import relativedelta, TU
    import pytz
    
    __virtualname__ = 'patchday'
    
    def beacon(config):
        ret = []
    
        grains = __grains__
        env = grains.get('environment', 'unknown')
        region = grains.get('region', 'unknown')
    
        # Define rules
        rules = {
            "dev": {"offset": 3, "week": 2},
            "prep": {"offset": 5, "week": 2},
            "prod": {"offset": 5, "week": 3}
        }
    
        region_tz = {
            "us-east": "America/New_York",
            "us-west": "America/Los_Angeles",
            "eu-central": "Europe/Berlin"
        }
    
        if env not in rules or region not in region_tz:
            return ret  # invalid or missing config
    
        tz = pytz.timezone(region_tz[region])
        now = datetime.datetime.now(tz)
        rule = rules[env]
    
        patch_day = (datetime.date(now.year, now.month, 1)
                     + relativedelta(weekday=TU(rule["week"]))
                     + datetime.timedelta(days=rule["offset"]))
    
        if now.date() == patch_day:
            ret.append({
                "tag": "patch/ready",
                "env": env,
                "region": region,
                "datetime": now.isoformat()
            })
    
        return ret
    

    3. Sync Custom Beacon to Minions

    On the master:

    salt '*' saltutil.sync_beacons
    

    Enable it:

    salt '*' beacons.add patchday '{"interval": 3600}'
    

    4. Define Reactor on the Master

    File: /etc/salt/master.d/reactor.conf

    reactor:
      - 'patch/ready':
        - /srv/reactor/start_patch.sls
    

    5. Create Reactor SLS File

    File: /srv/reactor/start_patch.sls

    {% set minion_id = data['id'] %}
    
    run_patching:
      local.state.apply:
        - tgt: {{ minion_id }}
        - arg:
          - patching
    

    This reacts to patch/ready event and applies the patching state to the calling minion.

    6. Testing the Full Flow

    1. Restart the minion: systemctl restart salt-minion

    2. Confirm the beacon is registered: salt '*' beacons.list

    3. Trigger a manual test (simulate patch day by modifying date logic)

    4. Watch events on master:

    salt-run state.event pretty=True
    
    5. Confirm patching applied:

    salt '*' saltutil.running
    

    7. Example: patching/init.sls

    Already shared, but here it is again for completeness:

    update-packages:
      pkg.uptodate:
        - refresh: True
        - retry:
            attempts: 3
            interval: 15
    
    reboot-if-needed:
      module.run:
        - name: system.reboot
        - onlyif: 'test -f /var/run/reboot-required'
    

    Benefits of This Model

    • Real-time and event-driven – no need for polling or external scripts

    • Timezone-aware, thanks to local beacon logic

    • Self-healing – minions signal readiness independently

    • Audit trail – each event is logged in Salt’s event bus

    • Extensible – you can easily add Slack/email alerts via additional reactors

    Goal

    1. Track patching event completions per minion

    2. Store patch event metadata: who patched, when, result, OS, IP, environment, region, etc.

    3. Generate readable reports in:

      • CSV/Excel

      • HTML dashboard

      • JSON for API or SIEM ingestion

    Step 1: Customize Reactor to Log Completion

    Let’s log each successful patch into a central log file or database (like SQLite or MariaDB).

    Update Reactor: /srv/reactor/start_patch.sls

    Add a returner to store job status.

    {% set minion_id = data['id'] %}
    
    run_patching:
      local.state.apply:
        - tgt: {{ minion_id }}
        - arg:
          - patching
        - kwarg:
            returner: local_json  # You can also use 'mysql', 'elasticsearch', etc.
    

    Configure Returner (e.g., local_json)

    In /etc/salt/master:

    returner_dirs:
      - /srv/salt/returners
    
    ext_returners:
      local_json:
        file: /var/log/salt/patch_report.json
    

    Or use a MySQL returner:

    mysql.host: 'localhost'
    mysql.user: 'salt'
    mysql.pass: 'yourpassword'
    mysql.db: 'salt'
    mysql.port: 3306
    

    Enable returners:

    salt-run saltutil.sync_returners
    

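Note that local_json is not a returner that ships with Salt; it is a custom returner you would drop into one of the configured returner directories. A minimal sketch of what such a module could look like (the file path, module name, and field selection are assumptions):

```python
# /srv/salt/returners/local_json.py (hypothetical custom returner)
import json
import time

LOG_FILE = "/var/log/salt/patch_report.json"

__virtualname__ = "local_json"

def __virtual__():
    return __virtualname__

def format_event(ret):
    """Normalize a Salt job return into one JSON-serializable record."""
    return {
        "id": ret.get("id"),
        "fun": ret.get("fun"),
        "jid": ret.get("jid"),
        "success": ret.get("success", False),
        "return": ret.get("return"),
        "_stamp": time.strftime("%Y-%m-%dT%H:%M:%S"),
    }

def returner(ret):
    """Append each job return to the log as one JSON line (JSONL)."""
    with open(LOG_FILE, "a") as f:
        f.write(json.dumps(format_event(ret)) + "\n")
```

Writing one JSON object per line keeps the log appendable and lets the post-processing script below parse it line by line.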
    Step 2: Normalize Patch Data (Optional Post-Processor)

    If using JSON log, create a post-processing script to build reports:

    process_patch_log.py

    import json
    import csv
    from datetime import datetime
    
    def load_events(log_file):
        with open(log_file, 'r') as f:
            return [json.loads(line) for line in f if line.strip()]
    
    def export_csv(events, out_file):
        with open(out_file, 'w', newline='') as f:
            writer = csv.DictWriter(f, fieldnames=[
                'minion', 'date', 'environment', 'region', 'result'
            ])
            writer.writeheader()
            for e in events:
                writer.writerow({
                    'minion': e['id'],
                'date': e.get('_stamp', ''),  # Salt's _stamp is an ISO-8601 string, not a Unix timestamp
                    'environment': e['return'].get('grains', {}).get('environment', 'unknown'),
                    'region': e['return'].get('grains', {}).get('region', 'unknown'),
                    'result': 'success' if e['success'] else 'failure'
                })
    
    events = load_events('/var/log/salt/patch_report.json')
    export_csv(events, '/srv/reports/patching_report.csv')
    

    Step 3: Build a Simple Web Dashboard

    If you want to display reports via a browser:

    🛠 Tools:

    • Flask or FastAPI

    • Bootstrap or Chart.js

    • Reads JSON/CSV and renders:

    Example Chart Dashboard Features:

    • Last patch date per server

    • 📍 Patching success rate per region/env

    • 🔴 Highlight failed patching

    • 📆 Monthly compliance timeline


    Step 4: Send Reports via Email (Optional)

    🐍 Python: send_report_email.py

    import smtplib
    from email.message import EmailMessage
    
    msg = EmailMessage()
    msg["Subject"] = "Monthly Patch Report"
    msg["From"] = "patchbot@example.com"
    msg["To"] = "it-lead@example.com"
    msg.set_content("Attached is the patch compliance report.")
    
    with open("/srv/reports/patching_report.csv", "rb") as f:
        msg.add_attachment(f.read(), maintype="text", subtype="csv", filename="patching_report.csv")
    
    with smtplib.SMTP("localhost") as s:
        s.send_message(msg)
    

    Schedule that weekly or monthly with cron.

    Flask Dashboard (Patch Reporting)

    app.py

    from flask import Flask, render_template
    import csv
    from collections import defaultdict
    
    app = Flask(__name__)
    
    @app.route('/')
    def index():
        results = []
        success_count = defaultdict(int)
        fail_count = defaultdict(int)
    
        with open('/srv/reports/patching_report.csv', 'r') as f:
            reader = csv.DictReader(f)
            for row in reader:
                results.append(row)
                key = f"{row['environment']} - {row['region']}"
                if row['result'] == 'success':
                    success_count[key] += 1
                else:
                    fail_count[key] += 1
    
        summary = [
            {"group": k, "success": success_count[k], "fail": fail_count[k]}
            for k in sorted(set(success_count) | set(fail_count))
        ]
    
        return render_template('dashboard.html', results=results, summary=summary)
    
    if __name__ == '__main__':
        app.run(debug=True, host='0.0.0.0', port=5000)
    

    templates/dashboard.html

    <!DOCTYPE html>
    <html>
    <head>
      <title>Patch Compliance Dashboard</title>
      <style>
        body { font-family: Arial; padding: 20px; }
        table { border-collapse: collapse; width: 100%; margin-bottom: 30px; }
        th, td { border: 1px solid #ccc; padding: 8px; text-align: left; }
        th { background-color: #f4f4f4; }
        .fail { background-color: #fdd; }
        .success { background-color: #dfd; }
      </style>
    </head>
    <body>
      <h1>Patch Compliance Dashboard</h1>
    
      <h2>Summary</h2>
      <table>
        <tr><th>Group</th><th>Success</th><th>Failure</th></tr>
        {% for row in summary %}
        <tr>
          <td>{{ row.group }}</td>
          <td>{{ row.success }}</td>
          <td>{{ row.fail }}</td>
        </tr>
        {% endfor %}
      </table>
    
      <h2>Detailed Results</h2>
      <table>
        <tr><th>Minion</th><th>Date</th><th>Environment</th><th>Region</th><th>Result</th></tr>
        {% for row in results %}
        <tr class="{{ row.result }}">
          <td>{{ row.minion }}</td>
          <td>{{ row.date }}</td>
          <td>{{ row.environment }}</td>
          <td>{{ row.region }}</td>
          <td>{{ row.result }}</td>
        </tr>
        {% endfor %}
      </table>
    </body>
    </html>
    

    How to Use

    pip install flask
    python app.py
    

    Then visit http://localhost:5000 or your server’s IP at port 5000.

    Optional: SIEM/Event Forwarding

    If you use Elasticsearch, Splunk, or Mezmo:

    • Use a returner like es_return, splunk_return, or send via custom script using REST API.

    • Normalize fields: hostname, env, os, patch time, result

    • Filter dashboards by compliance groupings

    TL;DR: Reporting Components Checklist

    | Component | Purpose | Tool |
    | --- | --- | --- |
    | JSON/DB logging | Track patch status | Returners |
    | Post-processing script | Normalize data for business | Python |
    | CSV/Excel export | Shareable report format | Python csv module |
    | HTML dashboard | Visualize trends/compliance | Flask, Chart.js, Bootstrap |
    | Email automation | Notify stakeholders | smtplib, cron |
    | SIEM/Splunk integration | Enterprise log ingestion | REST API or native returners |


  3. F.O.S.S

    A blog by Blogger in CodeName Blogs
    • 67 Entries
    • 0 Comments
    • 2569 Views
    by: Sreenath
    Wed, 23 Apr 2025 03:05:46 GMT


    Customize Logseq With Themes and Plugins

    Logseq provides all the necessary elements you need for creating your knowledge base.

    But one size doesn't fit all. You may need something extra that is either too complicated to achieve in Logseq or not possible at all.

    What do you do, then? You use external plugins and extensions.

    Thankfully, Logseq has a thriving marketplace where you can explore various plugins and extensions created by people who craved more from Logseq.

    Let me show you how you can install themes and plugins.

    🚧
    Privacy alert! Do note that plugins can access your graph and local files. You'll see this warning in Logseq as well. A more granular permission control system is not yet available.

    Installing a plugin in Logseq

    Click on the top-bar menu button and select Plugins as shown in the screenshot below.

    [Screenshot: Menu → Plugins]

    In the Plugins window, click on Marketplace.

    [Screenshot: Click on Marketplace tab]

    This will open the Logseq Plugins Marketplace. You can click on the title of a plugin to get the details about that plugin, including a sample screenshot.

    [Screenshot: Click on Plugin Title]

    If you find the plugin useful, use the Install button adjacent to the Plugin in the Marketplace section.

    [Screenshot: Install a Plugin]

    Managing Plugins

    To manage a plugin, like enable/disable, fine-tune, etc., go to Menu → Plugins. This will take you to the Manage Plugin interface.

    📋
    If you are on the Marketplace, just use the Installed tab to get all the installed plugins.
    [Screenshot: Installed plugins section]

    Here, you can enable/disable plugins in Logseq using the corresponding toggle button. Similarly, hover over the settings gear icon for a plugin and select Open Settings option to access plugin configuration.

    [Screenshot: Click on Plugin settings gear icon]

    Installing themes in Logseq

    Logseq looks good by default to me but you can surely experiment with its looks by installing new themes.

    Similar to what you saw in the plugin installation section, click on the Plugins option from the Logseq menu button.

    [Screenshot: Click on Menu → Plugins]

    Why did I not click the Themes option above? Well, because that is for switching themes, not installing.

    In the Plugins window, click on Marketplace section and select Themes.

    [Screenshot: Select Marketplace → Themes]

    Click on the title of a theme to get the details, including screenshots.

    [Screenshot: Logseq theme details page]

    To install a theme, use the Install button adjacent to the theme in Marketplace.

    [Screenshot: Click Install to install the theme]

    Enable/disable themes in Logseq

    🚧
    Changing themes is not done in this window. Theme switching will be discussed below.

    All the installed themes will be listed in Menu → Plugins → Installed → Themes section.

    [Screenshot: Installed themes listed]

    From here, you can disable/enable themes using the toggle button.

    Changing themes

    Make sure all the desired installed themes are enabled because disabled themes won't be shown in the theme switcher.

    Click on the main menu button and select the Themes option.

    [Screenshot: Click on Menu → Themes]

    This will bring up a drop-down menu from which you can select a theme. This is shown in the short video below.

    Updating plugins and themes

    Occasionally, plugins and themes will provide updates.

    To check for available plugin/theme updates, click on Menu → Plugins.

    Here, select the Installed section to access installed Themes and Plugins. There should be a Check for Update button for each item.

    Click on Check Update

    Click on it to check if any updates are available for the selected plugin/theme.

    Uninstall plugins and themes

    By now you know that in Logseq, both plugins and themes are considered plugins. So, you can uninstall both in the same way.

    First, click on Menu button and select the Plugins option.

    Click on the Menu and select Plugins

    Here, go to the Installed section. If you want to remove an installed plugin, go to the Plugins tab; if you would like to remove an installed theme, go to the Themes tab.

    Select Plugins or Themes Section

    Hover over the settings gear of the item that needs to be removed and select the Uninstall button.

    Uninstall a Plugin or Theme

    When prompted for confirmation, click on Yes, and the plugin/theme will be removed.

    Manage plugins from Logseq settings

    Logseq's settings provide a neat place for tweaking the installed plugins and themes, if they offer some extra settings.

    Click on the menu button on the top-bar and select the Settings button.

    Click on Menu → Settings

    In the settings window, click on Plugins section.

    Click on Plugins Section in Settings

    Here, you can get a list of plugins and themes that offer some tweaks.

    Plugin settings in Logseq Settings window

    And that's all you need to know about exploring plugins and themes in Logseq. In the next tutorial in this series, I'll discuss special pages like Journal. Stay tuned.


  4. Programmer's Corner

    A blog by Blogger in CodeName Blogs
    • 144 Entries
    • 0 Comments
    • 1255 Views
    by: Chris Coyier
    Mon, 21 Apr 2025 17:10:35 +0000


    I enjoyed Trys Mudford’s explanation of making rounded triangular boxes. It was a very real-world client need, and I do tend to prefer reading about technical solutions to real problems over theoretical ones. This one was tricky because this particular shape doesn’t have a terribly obvious way to draw it on the web.

    CSS’ clip-path is useful, but the final rounding was done with an unintuitive feGaussianBlur SVG filter. You could draw it all in SVG, but I think the % values you get to use with clip-path are a more natural fit to web content than pure SVG is. SVG just wasn’t born in a responsive web design world.

    The thing is: SVG has a viewBox which is a fixed coordinate system on which you draw things. The final SVG can be scaled and squished and stuff, but it’s all happening on this fixed grid.

    I remember when trying to learn the <path d=""> syntax in SVG how it’s almost an entire language unto itself, with lots of different letters issuing commands to a virtual pen. For example:

    • M 100,100 means “Pick up the pen and move it to the exact coordinates 100,100”
    • m 100,100 means “Move the pen 100 right and 100 down from wherever you currently are.”

    That syntax for the d attribute (also expressed with the path() function) can be applied in CSS, but I always thought that was very weird. The numbers are “unitless” in SVG, and that makes sense because the numbers apply to that invisible fixed grid put in place by the viewBox. But there is no viewBox in regular web layout, so those unitless numbers are translated to px, and px also isn’t particularly responsive web design friendly.

    This was my mind’s context when I saw the Safari 18.4 new features. One of them being a new shape() function:

    For complex graphical effects like clipping an image or video to a shape, authors often fall back on CSS masking so that they can ensure that the mask adapts to the size and aspect ratio of the content being masked. Using the clip-path property was problematic, because the only way to specify a complex shape was with the path() function, which takes an SVG-style path, and the values in these paths don’t have units; they are just treated as CSS pixels. It was impossible to specify a path that was responsive to the element being clipped.

    Yes! I’m glad they get it. I felt like I was going crazy when I would talk about this issue and get met with blank stares.

    Trys got so close with clip-path: polygon() alone on those rounded arrow shapes. The % values work nicely for random amounts of content inside (e.g. the “nose” should be at 50% of the height) and if the shape of the arrow needed to be maintained, px values could be mixed-and-matched in there.

    But the rounding was missing. There is no rounding with polygon().

    Or so I thought? I was on the draft spec anyway looking at shape(), which we’ll circle back to, but it does define the same round keyword and provides geometric diagrams with expectations on how it’s implemented.

    An optional <length> after a round keyword defines rounding for each vertex of the polygon. 

    There are no code examples, but I think it would look something like this:

    /* might work one day? */
    clip-path: polygon(0% 0% round 0%, 75% 0% round 10px, 100% 50% round 10px, 75% 100% round 10px, 0% 100% round 0%);

    I’d say “draft specs are just… draft specs”, but stable Safari is shipping with stuff in this draft spec so I don’t know how all that works. I did test this syntax across the browsers and nothing supports it. If it did, Trys’ work would have been quite a bit easier. Although the examples in that post where a border follows the curved paths… that’s still hard. Maybe we need clip-path-border?

    There is precedent for rounding in “basic shape” functions already. The inset() function has a round keyword which produces a rounded rectangle (think a simple border-radius). See this example, which actually does work.
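    For reference, a minimal sketch of that working inset() behavior (the class name here is hypothetical):

    ```css
    /* Clip the element to a rectangle inset 10px from every edge,
       with 12px rounded corners — like border-radius, but on the clip. */
    .card {
      clip-path: inset(10px round 12px);
    }
    ```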

    But anyway: that new shape() function. It looks like it is trying to replicate (the entire?) power of <path d=""> but do it with a more CSS friendly/native syntax. I’ll post the current syntax from the spec to help paint the picture that it’s a whole new language (🫥):

    <shape-command> = <move-command> | <line-command> | close |
                      <horizontal-line-command> | <vertical-line-command> | 
                      <curve-command> | <smooth-command> | <arc-command>
    
    <move-command> = move <command-end-point>
    <line-command> = line <command-end-point>
    <horizontal-line-command> = hline
            [ to [ <length-percentage> | left | center | right | x-start | x-end ]
            | by <length-percentage> ]
    <vertical-line-command> = vline
            [ to [ <length-percentage> | top | center | bottom | y-start | y-end ]
            | by <length-percentage> ]
    <curve-command> = curve
            [ [ to <position> with <control-point> [ / <control-point> ]? ]
            | [ by <coordinate-pair> with <relative-control-point> [ / <relative-control-point> ]? ] ]
    <smooth-command> = smooth
            [ [ to <position> [ with <control-point> ]? ]
            | [ by <coordinate-pair> [ with <relative-control-point> ]? ] ]
    <arc-command> = arc <command-end-point>
                [ [ of <length-percentage>{1,2} ]
                  && <arc-sweep>? && <arc-size>? && [rotate <angle>]? ]
    
    <command-end-point> = [ to <position> | by <coordinate-pair> ]
    <control-point> = [ <position> | <relative-control-point> ]
    <relative-control-point> = <coordinate-pair> [ from [ start | end | origin ] ]?
    <coordinate-pair> = <length-percentage>{2}
    <arc-sweep> = cw | ccw
    <arc-size> = large | small

    So instead of somewhat obtuse single-letter commands in the path syntax, these have more understandable names. Here’s an example again from the spec that draws a speech bubble shape:

    .bubble { 
      clip-path: 
        shape(
          from 5px 0,
          hline to calc(100% - 5px),
          curve to right 5px with right top,
          vline to calc(100% - 8px),
          curve to calc(100% - 5px) calc(100% - 3px) with right calc(100% - 3px),
          hline to 70%,
          line by -2px 3px,
          line by -2px -3px,
          hline to 5px,
          curve to left calc(100% - 8px) with left calc(100% - 3px),
          vline to 5px,
          curve to 5px top with left top
         );
    }

    You can see the rounded corners being drawn there with literal curve commands. I think it’s neat. So again Trys’ shapes could be drawn with this once it has more proper browser support. I love how with this syntax we can mix and match units, we could abstract them out with custom properties, we could animate them, they accept readable position keywords like “right”, we can use calc(), and all this really nice native CSS stuff that path() wasn’t able to give us. This is born in a responsive web design world.
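    For instance, a pointed-arrow clip could be sketched with shape(), mixing percentages, a custom property, and calc() — a hedged example, since the class name and values here are made up and browser support is still limited:

    ```css
    .arrow {
      --notch: 12px;
      clip-path: shape(
        from 0 0,
        hline to calc(100% - var(--notch)),
        line to 100% 50%,
        line to calc(100% - var(--notch)) 100%,
        hline to 0,
        close
      );
    }
    ```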

    Very nice win, web platform.


  5. Linux Tips

    A blog by Blogger in CodeName Blogs
    • 56 Entries
    • 0 Comments
    • 805 Views
    by: LHB Community
    Sun, 20 Apr 2025 12:23:45 +0530


    As a developer, efficiency is key. Being a full-stack developer myself, I’ve always thought of replacing boring tasks with automation.

    What could happen if I just keep writing new code in a Python file, and it gets evaluated every time I save it? Isn’t that a productivity boost?

    'Hot reload' is a valuable feature of the modern development process: it automatically reloads or refreshes the code after you make changes to a file. This helps developers see the effect of their changes instantly without manually restarting a process or refreshing the browser.
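    The core idea behind all of these tools can be sketched in a few lines of Python: remember a file's modification time and treat any change as a signal to re-run something. This is a simplified, polling-based illustration (real tools use OS file-system events such as inotify instead of polling); the FileWatcher name is made up for this sketch:

    ```python
    import os

    class FileWatcher:
        """Detect changes to a single file by polling its modification time."""

        def __init__(self, path):
            self.path = path
            self._last_mtime = os.stat(path).st_mtime

        def changed(self):
            """Return True once per modification since the last check."""
            mtime = os.stat(self.path).st_mtime
            if mtime != self._last_mtime:
                self._last_mtime = mtime
                return True
            return False
    ```

    A hot-reload loop would poll changed() at an interval and restart the watched process whenever it returns True.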

    Over the years, I’ve used tools like entr to keep Docker containers in sync every time I modify the docker-compose.yml file, or to test different CSS designs on the fly with browser-sync.

    1. entr

    entr (Event Notify Test Runner) is a lightweight command line tool for monitoring file changes and triggering specified commands. It’s one of my favorite tools to restart any CLI process, whether it be triggering a docker build, restarting a Python script, or rebuilding a C project.

    For developers who are used to the command line, entr provides a simple and efficient way to perform tasks such as building, testing, or restarting services in real time.

    Key Features

    • Lightweight, no additional dependencies.
    • Highly customizable
    • Ideal for use in conjunction with scripts or build tools.
    • Linux only.

    Installation

    All you have to do is type in the following command in the terminal:

    sudo apt install -y entr

    Usage

    Auto-trigger build tools: Use entr to automatically execute build commands like make, webpack, etc. Here's the command I use to do that:

    ls docker-compose.yml | entr -r docker build .

    Here, the -r flag reloads the child process, which is the ‘docker build’ command.


    Automatically run tests: Automatically re-run unit tests or integration tests after modifying the code.

    ls *.ts | entr bun test
    entr usage

    2. nodemon

    nodemon is an essential tool for developers working on Node.js applications. It automatically monitors changes to project files and restarts the Node.js server when files are modified, eliminating the need for developers to restart the server manually.

    Key Features

    • Monitor file changes and restart Node.js server automatically.
    • Supports JavaScript and TypeScript projects
    • Customize which files and directories to monitor.
    • Supports common web frameworks such as Express, Hapi.

    Installation

    You can type in a single command in the terminal to install the tool:

    npm install -g nodemon

    If you are installing Node.js and npm for the first time on an Ubuntu-based distribution, you can follow our Node.js installation tutorial.

    Usage

    When you type in the following command, it starts server.js and will automatically restart the server if the file changes.

    nodemon server.js
    nodemon
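    Instead of passing everything on the command line, nodemon can also read its configuration from a nodemon.json file in the project root. A minimal sketch (the watch paths and entry point here are hypothetical):

    ```json
    {
      "watch": ["src"],
      "ext": "js,ts,json",
      "ignore": ["node_modules"],
      "exec": "node server.js"
    }
    ```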

    3. LiveReload.net

    LiveReload.net is a very popular tool, especially for front-end developers. It automatically refreshes the browser after you save a file, helping developers see the effect of changes immediately, eliminating the need to manually refresh the browser.

    Unlike the others, it is a web-based tool, and you need to head to its official website to get started. Every file remains in your local network; no files are uploaded to a third-party server.

    Key Features

    • Seamless integration with editors
    • Supports custom trigger conditions to refresh the page
    • Good compatibility with front-end frameworks and static websites.

    Usage

    livereload

    It's stupidly simple. Just load up the website, and drag and drop your folder to start making live changes. 

    4. fswatch

    fswatch is a cross-platform file change monitoring tool for Linux, macOS, and developers using it on Windows via WSL (Windows Subsystem for Linux). It is powerful enough to monitor multiple files and directories for changes and perform actions accordingly.

    Key Features

    • Supports cross-platform operation and can be used on Linux and macOS.
    • It can be used with custom scripts to trigger multiple operations.
    • Flexible configuration options to filter specific types of file changes.

    Installation

    To install it on a Linux distribution, type in the following in the terminal:

    sudo apt install -y fswatch

    If you have a macOS computer, you can use the command:

    brew install fswatch

    Usage

    You can try typing in the command here:

    fswatch -o . | xargs -n1 -I{} make
    fswatch

    And, then you can chain this command with an entr command for a rich interactive development experience.

    ls hellomake | entr -r ./hellomake

    The fswatch command will invoke make to compile the C application, and then, if our binary “hellomake” is modified, entr will run it again. Isn’t this a time saver?

    5. Watchexec

    Watchexec is a cross-platform command line tool for automating the execution of specified commands when a file or directory changes. It is a lightweight file monitor that helps developers automate tasks such as running tests, compiling code, or reloading services when a source code file changes. 

      Key Features

    • Supports cross-platform use (macOS, Linux, Windows).
    • Fast, written in Rust.
    • Lightweight, no complex configuration.

    Installation

    On Linux, just type in:

    sudo apt install watchexec

    And, if you want to try it on macOS (via Homebrew):

    brew install watchexec

    You can also download the corresponding binaries for your system from the project’s GitHub releases section.

    Usage

    All you need to do is just run the command:

    watchexec -e py "pytest"

    This will run pytest every time a Python file in the current directory is modified.

    6. BrowserSync

    BrowserSync is a powerful tool that not only monitors file changes, but also synchronizes pages across multiple devices and browsers. BrowserSync can be ideal for developers who need to perform cross-device testing.

    Key features

    • Cross-browser synchronization.
    • Automatically refreshes multiple devices and browsers.
    • Built-in local development server.

    Installation

    Assuming you have Node.js installed, type in the following command:

    npm i -g browser-sync

    Or, you can use:

    npx browser-sync

    Usage

    Here is what the commands for it look like:

    browser-sync start --server --files "*.css, *.js, *.html"
    npx browser-sync start --server --files "*.css, *.js, *.html"

    You can use either of the two commands for your experiments.

    browsersync

    This command starts a local server, monitors the CSS, JS, and HTML files for changes, and automatically refreshes the browser as soon as a change occurs. If you’re a developer who isn't using any modern frontend framework, this comes in handy.

    7. watchdog & watchmedo

    Watchdog is a file system monitoring library written in Python that allows you to monitor file and directory changes in real time. Whether it's file creation, modification, deletion, or file move, Watchdog can help you catch these events and trigger the appropriate action.

    Key Features

    • Cross-platform support
    • Provides full flexibility with its Python-based API
    • Includes watchmedo script to hook any CLI application easily

    Installation

    Install Python first, and then install with pip using the command below:

    pip install watchdog

    Usage

    Type in the following and watch it in action:

    watchmedo shell-command --patterns="*.py" --recursive --command="python factorial.py" .
    watchdog

    This command watches the current directory for changes to Python files and re-runs factorial.py whenever a matching file is modified, created, or deleted.

    In the command, --patterns="*.py" watches .py files, --recursive watches subdirectories, and --command="python factorial.py" runs the Python file.

    Conclusion

    Hot reloading tools have become increasingly important in the development process, and they can help developers save a lot of time and effort and increase productivity. With tools like entr, nodemon, LiveReload, fswatch, Watchexec, BrowserSync, and watchdog, you can easily automate reloading and get live feedback without having to manually restart the server or refresh the browser.

    Integrating these tools into your development process can drastically reduce repetitive work and waiting time, allowing you to focus on writing high-quality code.

    Whether you're developing a front-end application or a back-end service or managing a complex project, using these hot-reloading tools will enhance your productivity.


    Bhuwan Mishra is a Fullstack developer, with Python and Go as his tools of choice. He takes pride in building and securing web applications, APIs, and CI/CD pipelines, as well as tuning servers for optimal performance. He also has a passion for working with Kubernetes.


  6. Opinion Blogs

    • 0 Entries
    • 0 Comments
    • 97 Views

    No blog entries yet
