Author: Luka Paunović

  • Enterprise-Grade Ubuntu 22.04 Installation on Software RAID 0 with UEFI Boot


    Setting up a high-performance Linux environment on enterprise hardware isn’t just about installing an OS – it’s about precision, flexibility, and reliability. In this post, I’ll walk you through a real-world case study where I deployed Ubuntu 22.04 LTS on a ProLiant DL360 Gen9 server using mdadm-based software RAID 0 and a fully optimized UEFI boot setup.

    The Challenge

    My client needed a performant server for hosting CyberPanel and other web services. The server was equipped with 8x 4TB drives and required a RAID 0 configuration for maximum throughput. It also needed UEFI boot support and a fully customized minimal Ubuntu system for performance and control.

    However, I ran into multiple real-world obstacles:

    • The default Ubuntu installer doesn’t support this RAID 0 layout natively.
    • UEFI boot and GRUB issues on software RAID.
    • Network not coming up post-installation.
    • Emergency-mode boot loops due to missing base packages.

    The Solution (Step-by-Step)

    Note: perform the following steps from a rescue system or Live CD.

    Step 1: Disk Partitioning

    Each of the 8 drives was partitioned identically using parted:

    • /dev/sdX1: 512MB FAT32 for EFI (boot, esp flags)
    • /dev/sdX2: 1MB BIOS boot partition (bios_grub flag)
    • /dev/sdX3: remaining space for RAID
    for disk in /dev/sd{a..h}; do
      parted -s $disk mklabel gpt
      parted -s $disk mkpart primary fat32 1MiB 513MiB
      parted -s $disk set 1 esp on
      parted -s $disk mkpart primary 513MiB 514MiB
      parted -s $disk set 2 bios_grub on
      parted -s $disk mkpart primary ext4 514MiB 100%
    done

    Step 2: RAID Creation

    Create RAID 0 using all /dev/sdX3 partitions:

    mdadm --create /dev/md0 --level=0 --raid-devices=8 /dev/sd{a..h}3

    Step 3: Filesystem Setup

    mkfs.ext4 /dev/md0
    mount /dev/md0 /mnt
    mkfs.vfat -F32 /dev/sda1
    mkdir -p /mnt/boot/efi
    mount /dev/sda1 /mnt/boot/efi

    Step 4: Base Ubuntu Installation (debootstrap)

    apt update
    apt install debootstrap -y
    debootstrap jammy /mnt http://archive.ubuntu.com/ubuntu

    Step 5: Mount and Chroot

    mount --bind /dev /mnt/dev
    mount --bind /proc /mnt/proc
    mount --bind /sys /mnt/sys
    mount --bind /run /mnt/run
    chroot /mnt

    Step 6: Essential Packages

    apt update
    apt install ubuntu-standard ubuntu-minimal systemd-sysv grub-efi grub-efi-amd64 shim mdadm net-tools ifupdown isc-dhcp-client -y

    Step 7: Configure RAID and FSTAB

    echo "DEVICE partitions" > /etc/mdadm/mdadm.conf
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf
    update-initramfs -u -k all

    Use blkid to find the correct UUIDs, then populate /etc/fstab:

    UUID=xxxxx-root  /          ext4 defaults  0 1
    UUID=xxxxx-efi   /boot/efi  vfat defaults  0 1

    Step 8: GRUB Installation

    grub-install --target=x86_64-efi --efi-directory=/boot/efi --bootloader-id=ubuntu
    for disk in /dev/sd{b..h}; do grub-install --target=i386-pc $disk; done  # BIOS fallback (optional, needs grub-pc-bin)
    update-grub

    Step 9: Create UEFI Boot Entry

    efibootmgr --create --disk /dev/sda --part 1 --label "Ubuntu" --loader '\EFI\ubuntu\grubx64.efi'

    Step 10: Network Configuration

    echo 'auto lo
    iface lo inet loopback
    
    auto eno49
    iface eno49 inet static
      address 178.222.247.237
      netmask 255.255.255.128
      gateway 178.222.247.1
      dns-nameservers 1.1.1.1 8.8.8.8' > /etc/network/interfaces
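    For reference, a stock Ubuntu 22.04 install manages networking with netplan rather than ifupdown. Since we installed ifupdown explicitly, the interfaces file above is what applies here, but an equivalent netplan config (same addresses; /25 corresponds to netmask 255.255.255.128; the file name is arbitrary) would look like:

```yaml
# /etc/netplan/01-static.yaml -- reference only; this setup uses ifupdown
network:
  version: 2
  ethernets:
    eno49:
      addresses:
        - 178.222.247.237/25      # netmask 255.255.255.128
      routes:
        - to: default
          via: 178.222.247.1
      nameservers:
        addresses: [1.1.1.1, 8.8.8.8]
```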

    The system now boots cleanly from /dev/md0, detects the RAID volume immediately via initramfs, and launches a fully minimal but extendable Ubuntu 22.04 environment ready for CyberPanel.

    Boot time was optimized, all legacy EFI entries were removed, and the network stack is stable and persistent across reboots.

  • How to install chkrootkit (AlmaLinux, CentOS, RHEL)


    Run the following script:

    bash <(curl -s https://raw.githubusercontent.com/lukapaunovic/chkrootkit-install-script/refs/heads/master/install.sh)

    To run a server check:

    /usr/local/bin/chkrootkit | grep -v -E "not(hing)? (infected|found|tested)"
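    To see what the filter above does, here it is applied to a few fabricated chkrootkit-style lines (the sample output is invented for illustration; real output will differ):

```shell
# Fabricated sample of chkrootkit-style output (for illustration only)
sample_output="Checking \`ifconfig'... not infected
Searching for sniffer's logs... nothing found
Checking \`lkm'... not tested
Checking \`bindshell'... INFECTED (PORTS: 465)"

# Same filter as above: clean results are hidden, findings remain
filtered=$(printf '%s\n' "$sample_output" | grep -v -E "not(hing)? (infected|found|tested)")
printf '%s\n' "$filtered"
```

    Only the suspicious line survives the filter, which keeps the daily output short enough to actually read.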

    Official GitHub Installer Repository

  • Advanced ADSL Stability Script: Real-Time CRC Monitoring and Auto-Recovery


    For TP-LINK ADSL modems.

    If you’re stuck with an ADSL connection, you’ve probably experienced random disconnections, unstable speeds, or the dreaded CRC error spikes that cripple your internet. The worst part? These issues often happen while the connection is technically “up,” but completely unusable.

    Instead of manually restarting the modem every time the connection gets unstable, I decided to automate the entire process using a lightweight Bash + Expect script. Now, my modem resets itself when things start to go wrong — and my internet remains stable without me lifting a finger.


    What Exactly Did I Build?

    • A script that monitors CRC errors in real time by connecting to the modem over Telnet.
    • When the number of CRC errors exceeds a configurable threshold (e.g., 100 errors), the script automatically sends a reset command to the modem.
    • All actions are logged with timestamps for easy review.
    • The script runs continuously and checks the connection every few seconds.

    Why Is This Important?

    • CRC errors on ADSL lines often don’t increase gradually.
    • Instead, the error count can explode into the thousands within minutes, leading to massive packet loss and unusable internet — while your modem still shows as “connected.”
    • ISPs usually lock your line speed and don’t dynamically adjust it when signal quality degrades.
    • Manual resets become a constant annoyance… unless you automate them.

    How Does It Work?

    The script uses expect to reliably automate Telnet sessions with the modem. It:

    1. Logs into the modem using Telnet.
    2. Retrieves the current CRC error count.
    3. If the error count crosses the defined limit, sends the wan adsl reset command.
    4. Logs every reset with a timestamp.

    The Script

    #!/bin/bash
    
    MODEM_IP="192.168.1.1"
    PASSWORD="admin"
    CRC_LIMIT=100
    CHECK_INTERVAL=5
    
    check_crc() {
        expect -c "
            set timeout 10
            spawn telnet $MODEM_IP
            expect \"Password:\" { send \"$PASSWORD\r\" }
            expect \">\" { send \"wan adsl perfdata\r\" }
            expect \">\" { send \"exit\r\" }
            expect eof
        " | grep -i "CRC" | head -n 1 | grep -oE '[0-9]+'
    }
    
    reset_adsl() {
        expect -c "
            set timeout 10
            spawn telnet $MODEM_IP
            expect \"Password:\" { send \"$PASSWORD\r\" }
            expect \">\" { send \"wan adsl reset\r\" }
            expect \">\" { send \"exit\r\" }
            expect eof
        " > /dev/null 2>&1
        echo "$(date): ADSL connection reset due to CRC errors!" >> /var/log/adsl_watchdog.log
    }
    
    echo "Started ADSL Watchdog ..."
    
    while true; do
        crc_count=$(check_crc)
        crc_count=${crc_count:-0}
    
        echo "$(date): Current CRC error count: $crc_count"
    
        if [ "$crc_count" -gt "$CRC_LIMIT" ]; then
            echo "$(date): CRC Threshold reached ($CRC_LIMIT). Resetting connection..."
            reset_adsl
            sleep 60
        else
            sleep $CHECK_INTERVAL
        fi
    done
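    The parsing pipeline inside check_crc can be exercised offline against sample perfdata-style text (the numbers below are made up; the exact output format depends on the modem firmware):

```shell
# Fabricated perfdata-style sample; real output varies per firmware
sample="near-end FEC error fast: 0
near-end CRC error fast: 137
near-end HEC error fast: 3"

# Same extraction as in check_crc: take the first CRC line, pull the number
crc=$(printf '%s\n' "$sample" | grep -i "CRC" | head -n 1 | grep -oE '[0-9]+')
echo "$crc"
```

    Note the final grep -o prints every number on the line; if your firmware prints more than one on the CRC line, append another head -n 1 after it.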

    I recommend running this on a Raspberry Pi 🙂

  • Complete Guide: Setting Up Varnish Cache for WordPress with NGINX


    Using Varnish Cache in front of a WordPress site can drastically improve performance, reduce server load, and serve content lightning fast — especially when paired with NGINX and Cloudflare. Below is a complete and production-ready setup, including all configuration files and best practices.

    Requirements

    • A Linux server (Ubuntu/Debian/CentOS)
    • WordPress installed and working
    • NGINX (used as SSL terminator + PHP backend)
    • PHP-FPM (for dynamic content)
    • Varnish installed
    • Cloudflare (optional but supported)

    1. Varnish Configuration (default.vcl)

    Location: usually /etc/varnish/default.vcl
    Ports: Varnish listens externally on :6081; the internal NGINX backend listens on :6216

    vcl 4.0;
    
    import std;
    import proxy;
    
    
    # Define individual backends
    backend default {
        .host = "127.0.0.1";
        .port = "6216";  # internal NGINX backend port (see section 2C)
    }
    
    # Add hostnames, IP addresses and subnets that are allowed to purge content
    acl purge {
        "YOUR_SERVER_IP";
        # Cloudflare IP ranges (https://www.cloudflare.com/ips/)
        "173.245.48.0"/20;
        "103.21.244.0"/22;
        "103.22.200.0"/22;
        "103.31.4.0"/22;
        "141.101.64.0"/18;
        "108.162.192.0"/18;
        "190.93.240.0"/20;
        "188.114.96.0"/20;
        "197.234.240.0"/22;
        "198.41.128.0"/17;
        "162.158.0.0"/15;
        "104.16.0.0"/13;
        "104.24.0.0"/14;
        "172.64.0.0"/13;
        "131.0.72.0"/22;
    }
    
    sub vcl_recv {
        set req.backend_hint = default;
    
        # Remove empty query string parameters
        # e.g.: www.example.com/index.html?
        if (req.url ~ "\?$") {
            set req.url = regsub(req.url, "\?$", "");
        }
    
        # Remove port number from host header
        set req.http.Host = regsub(req.http.Host, ":[0-9]+", "");
    
        # Sorts query string parameters alphabetically for cache normalization purposes
        set req.url = std.querysort(req.url);
    
        # Remove the proxy header to mitigate the httpoxy vulnerability
        # See https://httpoxy.org/
        unset req.http.proxy;
    
        # Add X-Forwarded-Proto header when using https
        if (!req.http.X-Forwarded-Proto) {
            if(std.port(server.ip) == 443 || std.port(server.ip) == 8443) {
                set req.http.X-Forwarded-Proto = "https";
            } else {
                set req.http.X-Forwarded-Proto = "http";
            }
        }
    
        # Purge logic to remove objects from the cache.
        # Tailored to the Proxy Cache Purge WordPress plugin
        # See https://wordpress.org/plugins/varnish-http-purge/
        if(req.method == "PURGE") {
            if(!client.ip ~ purge) {
                return(synth(405,"PURGE not allowed for this IP address"));
            }
            if (req.http.X-Purge-Method == "regex") {
                ban("obj.http.x-url ~ " + req.url + " && obj.http.x-host == " + req.http.host);
                return(synth(200, "Purged"));
            }
            ban("obj.http.x-url == " + req.url + " && obj.http.x-host == " + req.http.host);
            return(synth(200, "Purged"));
        }
    
        # Only handle relevant HTTP request methods
        if (
            req.method != "GET" &&
            req.method != "HEAD" &&
            req.method != "PUT" &&
            req.method != "POST" &&
            req.method != "PATCH" &&
            req.method != "TRACE" &&
            req.method != "OPTIONS" &&
            req.method != "DELETE"
        ) {
            return (pipe);
        }
    
        # Remove tracking query string parameters used by analytics tools
        if (req.url ~ "(\?|&)(_branch_match_id|_bta_[a-z]+|_bta_c|_bta_tid|_ga|_gl|_ke|_kx|campid|cof|customid|cx|dclid|dm_i|ef_id|epik|fbclid|gad_source|gbraid|gclid|gclsrc|gdffi|gdfms|gdftrk|hsa_acc|hsa_ad|hsa_cam|hsa_grp|hsa_kw|hsa_mt|hsa_net|hsa_src|hsa_tgt|hsa_ver|ie|igshid|irclickid|matomo_campaign|matomo_cid|matomo_content|matomo_group|matomo_keyword|matomo_medium|matomo_placement|matomo_source|mc_[a-z]+|mc_cid|mc_eid|mkcid|mkevt|mkrid|mkwid|msclkid|mtm_campaign|mtm_cid|mtm_content|mtm_group|mtm_keyword|mtm_medium|mtm_placement|mtm_source|nb_klid|ndclid|origin|pcrid|piwik_campaign|piwik_keyword|piwik_kwd|pk_campaign|pk_keyword|pk_kwd|redirect_log_mongo_id|redirect_mongo_id|rtid|s_kwcid|sb_referer_host|sccid|si|siteurl|sms_click|sms_source|sms_uph|srsltid|toolid|trk_contact|trk_module|trk_msg|trk_sid|ttclid|twclid|utm_[a-z]+|utm_campaign|utm_content|utm_creative_format|utm_id|utm_marketing_tactic|utm_medium|utm_source|utm_source_platform|utm_term|vmcid|wbraid|yclid|zanpid)=") {
            set req.url = regsuball(req.url, "(_branch_match_id|_bta_[a-z]+|_bta_c|_bta_tid|_ga|_gl|_ke|_kx|campid|cof|customid|cx|dclid|dm_i|ef_id|epik|fbclid|gad_source|gbraid|gclid|gclsrc|gdffi|gdfms|gdftrk|hsa_acc|hsa_ad|hsa_cam|hsa_grp|hsa_kw|hsa_mt|hsa_net|hsa_src|hsa_tgt|hsa_ver|ie|igshid|irclickid|matomo_campaign|matomo_cid|matomo_content|matomo_group|matomo_keyword|matomo_medium|matomo_placement|matomo_source|mc_[a-z]+|mc_cid|mc_eid|mkcid|mkevt|mkrid|mkwid|msclkid|mtm_campaign|mtm_cid|mtm_content|mtm_group|mtm_keyword|mtm_medium|mtm_placement|mtm_source|nb_klid|ndclid|origin|pcrid|piwik_campaign|piwik_keyword|piwik_kwd|pk_campaign|pk_keyword|pk_kwd|redirect_log_mongo_id|redirect_mongo_id|rtid|s_kwcid|sb_referer_host|sccid|si|siteurl|sms_click|sms_source|sms_uph|srsltid|toolid|trk_contact|trk_module|trk_msg|trk_sid|ttclid|twclid|utm_[a-z]+|utm_campaign|utm_content|utm_creative_format|utm_id|utm_marketing_tactic|utm_medium|utm_source|utm_source_platform|utm_term|vmcid|wbraid|yclid|zanpid)=[-_A-z0-9+(){}%.*]+&?", "");
            set req.url = regsub(req.url, "[?|&]+$", "");
        }
    
        # Only cache GET and HEAD requests
        if (req.method != "GET" && req.method != "HEAD") {
            set req.http.X-Cacheable = "NO:REQUEST-METHOD";
            return(pass);
        }
    
        # Mark static files with the X-Static-File header, and remove any cookies
        # X-Static-File is also used in vcl_backend_response to identify static files
        if (req.url ~ "^[^?]*\.(7z|avi|bmp|bz2|css|csv|doc|docx|eot|flac|flv|gif|gz|ico|jpeg|jpg|js|less|mka|mkv|mov|mp3|mp4|mpeg|mpg|odt|ogg|ogm|opus|otf|pdf|png|ppt|pptx|rar|rtf|svg|svgz|swf|tar|tbz|tgz|ttf|txt|txz|wav|webm|webp|woff|woff2|xls|xlsx|xml|xz|zip)(\?.*)?$") {
            set req.http.X-Static-File = "true";
            unset req.http.Cookie;
            return(hash);
        }
    
        # No caching of special URLs, logged in users and some plugins
        if (
            req.http.Cookie ~ "wordpress_(?!test_)[a-zA-Z0-9_]+|wp-postpass|comment_author_[a-zA-Z0-9_]+|woocommerce_cart_hash|woocommerce_items_in_cart|wp_woocommerce_session_[a-zA-Z0-9]+|wordpress_logged_in_|comment_author|PHPSESSID" ||
            req.http.Authorization ||
            req.url ~ "add_to_cart" ||
            req.url ~ "edd_action" ||
            req.url ~ "nocache" ||
            req.url ~ "^/addons" ||
            req.url ~ "^/bb-admin" ||
            req.url ~ "^/bb-login.php" ||
            req.url ~ "^/bb-reset-password.php" ||
            req.url ~ "^/cart" ||
            req.url ~ "^/checkout" ||
            req.url ~ "^/control.php" ||
            req.url ~ "^/login" ||
            req.url ~ "^/logout" ||
            req.url ~ "^/lost-password" ||
            req.url ~ "^/my-account" ||
            req.url ~ "^/product" ||
            req.url ~ "^/register" ||
            req.url ~ "^/register.php" ||
            req.url ~ "^/server-status" ||
            req.url ~ "^/signin" ||
            req.url ~ "^/signup" ||
            req.url ~ "^/stats" ||
            req.url ~ "^/wc-api" ||
            req.url ~ "^/wp-admin" ||
            req.url ~ "^/wp-comments-post.php" ||
            req.url ~ "^/wp-cron.php" ||
            req.url ~ "^/wp-login.php" ||
            req.url ~ "^/wp-activate.php" ||
            req.url ~ "^/wp-mail.php" ||
            req.url ~ "^\?add-to-cart=" ||
            req.url ~ "^\?wc-api=" ||
            req.url ~ "^/preview=" ||
            req.url ~ "^/\.well-known/acme-challenge/"
        ) {
            set req.http.X-Cacheable = "NO:Logged in/Got Sessions";
            if (req.http.X-Requested-With == "XMLHttpRequest") {
                set req.http.X-Cacheable = "NO:Ajax";
            }
            return(pass);
        }
    
        # Cache pages with cookies for non-personalized content
        if (!req.http.Cookie || req.http.Cookie ~ "wmc_ip_info|wmc_current_currency|wmc_current_currency_old") {
            return(hash);  # Cache the response despite cookies
        }
    
        # Remove any cookies left
        unset req.http.Cookie;
        return(hash);
    
    }
    
    sub vcl_hash {
        if(req.http.X-Forwarded-Proto) {
            # Create cache variations depending on the request protocol
            hash_data(req.http.X-Forwarded-Proto);
        }
    }
    
    sub vcl_backend_response {
        # Inject URL & Host header into the object for asynchronous banning purposes
        set beresp.http.x-url = bereq.url;
        set beresp.http.x-host = bereq.http.host;
    
        # If we don't get a Cache-Control header from the backend,
        # we default to a 1-hour TTL for all objects
        if (!beresp.http.Cache-Control) {
            set beresp.ttl = 1h;
            set beresp.http.X-Cacheable = "YES:Forced";
        }
    
        # If the file is marked as static we cache it for 1 day
        if (bereq.http.X-Static-File == "true") {
            unset beresp.http.Set-Cookie;
            set beresp.http.X-Cacheable = "YES:Forced";
            set beresp.ttl = 1d;
        }
    
        # Remove the Set-Cookie header when a Wordfence cookie is set
        if (beresp.http.Set-Cookie ~ "wfvt_|wordfence_verifiedHuman") {
            unset beresp.http.Set-Cookie;
        }
    
        if (beresp.http.Set-Cookie) {
            set beresp.http.X-Cacheable = "NO:Got Cookies";
        } elseif(beresp.http.Cache-Control ~ "private") {
            set beresp.http.X-Cacheable = "NO:Cache-Control=private";
        }
    }
    
    sub vcl_deliver {
        # Debug header
        if(req.http.X-Cacheable) {
            set resp.http.X-Cacheable = req.http.X-Cacheable;
        } elseif(obj.uncacheable) {
            if(!resp.http.X-Cacheable) {
                set resp.http.X-Cacheable = "NO:UNCACHEABLE";
            }
        } elseif(!resp.http.X-Cacheable) {
            set resp.http.X-Cacheable = "YES";
        }
    
        # Cleanup of headers
        unset resp.http.x-url;
        unset resp.http.x-host;
    }
    
    Below is the matching internal NGINX vhost as used behind Varnish ({{root}}, {{php_fpm_port}} and {{php_settings}} are template placeholders to replace with your values):

    server {
        listen 6216;
        listen [::]:6216;
        server_name localhost;
        {{root}}
      
        try_files $uri $uri/ /index.php?$args;
        index index.php index.html;
      
        location ~ \.php$ {
          include fastcgi_params;
          fastcgi_intercept_errors on;
          fastcgi_index index.php;
          fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
          try_files $uri =404;
          fastcgi_read_timeout 3600;
          fastcgi_send_timeout 3600;
          fastcgi_param HTTPS "on";
          fastcgi_param SERVER_PORT 443;
          fastcgi_pass 127.0.0.1:{{php_fpm_port}};
          fastcgi_param PHP_VALUE "{{php_settings}}";
        }
      
        # Static files handling
        location ~* ^.+\.(css|js|jpg|jpeg|gif|png|ico|gz|svg|svgz|ttf|otf|woff|woff2|eot|mp4|ogg|ogv|webm|webp|zip|swf|map)$ {
          add_header Access-Control-Allow-Origin "*";
          expires max;
          access_log off;
        }
      
        location /.well-known/traffic-advice {
          types { } default_type "application/trafficadvice+json; charset=utf-8";
        }
      
        if (-f $request_filename) {
          break;
        }
      }

    🔁 Replace YOUR_SERVER_IP with your server’s actual public IP; PURGE requests are only accepted from addresses in the purge ACL.

    🔁 Add or adjust /my-account and the other no-cache paths if they have been translated or renamed.

    The vcl_recv, vcl_backend_response, vcl_deliver sections include:

    • Cookie & header normalization
    • Full WooCommerce & WordPress compatibility
    • Query string cleaning (especially for tracking)
    • Support for regex-based purging via Varnish HTTP Purge plugin

    👉 Tip: This config already handles X-Forwarded-Proto, removes unneeded cookies, and bans/purges objects precisely.
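    The tracking-parameter cleanup can be sanity-checked outside Varnish with an equivalent (heavily simplified) sed expression; only three of the parameter patterns are included here for brevity:

```shell
# Simplified shell analogue of the VCL tracking-parameter cleanup above;
# the full VCL covers many more parameters
url="/index.html?utm_source=news&utm_medium=email&id=42"

clean=$(printf '%s' "$url" \
  | sed -E 's/(utm_[a-z]+|fbclid|gclid)=[^&]*&?//g' \
  | sed -E 's/[?&]+$//')
echo "$clean"
```

    The second sed mirrors the VCL’s trailing-separator cleanup, so a URL whose query string consisted entirely of tracking parameters normalizes to the bare path.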

    2. NGINX Configuration

    You’ll need:

    • One server block to redirect www to non-www
    • One public-facing block that proxies traffic to Varnish (port 6081)
    • One internal block on 127.0.0.1:6216 to serve PHP files
    • Optional: block on port 8080 for wp-admin bypass

    A. Redirect www to non-www

    server {
      listen 80;
      listen 443 ssl http2;
      server_name www.example.com;
      return 301 https://example.com$request_uri;
    }

    B. Public Server Block (Varnish Frontend)

    server {
      listen 80;
      listen 443 ssl http2;
      server_name example.com;
    
      ssl_certificate     /path/to/cert.pem;
      ssl_certificate_key /path/to/key.pem;
    
      real_ip_header CF-Connecting-IP;  # requires matching set_real_ip_from entries for the Cloudflare ranges
    
      location / {
        proxy_pass http://127.0.0.1:6081;  # Varnish listener
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto https;
      }
    
      location ~ \.(css|js|png|jpg|jpeg|gif|ico|svg|woff|woff2|ttf|eot|mp4|webp|zip)$ {
        expires max;
        access_log off;
      }
    
      location ~ /(wp-login\.php|wp-admin/) {
        proxy_pass http://127.0.0.1:8080;  # Bypass Varnish
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
      }
    }

    C. Internal PHP Handler (for Varnish)

    server {
      listen 6216;
      server_name localhost;
    
      root /var/www/html;
      index index.php index.html;
    
      location / {
        try_files $uri $uri/ /index.php?$args;
      }
    
      location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass 127.0.0.1:9000;  # Adjust to your PHP-FPM port
        fastcgi_read_timeout 300;
      }
    }

    D. Optional: wp-admin Direct Access

    server {
      listen 8080;
      server_name example.com;
    
      root /var/www/html;
      index index.php index.html;
    
      location / {
        try_files $uri $uri/ /index.php?$args;
      }
    
      location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass 127.0.0.1:9000;
      }
    }

    3. System & Service Setup

    Install and Enable Varnish:

    apt install varnish   # Debian/Ubuntu
    yum install varnish   # RHEL/CentOS/AlmaLinux

    Edit the varnish systemd unit so it listens on port 6081:

    sudo systemctl edit --full varnish

    Update ExecStart line:

    ExecStart=/usr/sbin/varnishd -a :6081 -T localhost:6082 -f /etc/varnish/default.vcl -s malloc,512m

    Reload systemd:

    systemctl daemon-reload
    systemctl restart varnish

    4. Adjust WordPress

    Install the Proxy Cache Purge plugin (the PURGE logic in the VCL above is tailored to it) so the cache is invalidated automatically when content changes.


    5. Testing the Cache

    Run:

    curl -I https://example.com -H "Host: example.com"

    Check headers:

    X-Cacheable: YES

    Use varnishlog to debug:

    varnishlog -g request -q "ReqMethod eq 'GET'"

    What You MUST Change:

    Placeholder              Replace With
    ------------------------ ------------------------------------
    YOUR_SERVER_IP           Your public server IP (IPv4)
    example.com              Your real domain
    /var/www/html            Your actual WordPress root path
    127.0.0.1:9000           Your PHP-FPM socket or port
    SSL cert paths           Valid paths to cert.pem and key.pem
    Varnish memory (512m)    Adjust according to your RAM

    ⚠️ Notes

    • Ensure that ports 6216 and 6081 are not in use by another process.
    • If you’re using Cloudflare, make sure you’re not caching HTML there unless you know what you’re doing.
    • Purge logic depends on the WordPress plugin, or you can implement X-Purge-Method: regex support manually (already included above).
    • Do NOT cache wp-admin or login pages.

    If you need help with load balancing with Varnish or any other advanced configuration, feel free to reach out 🙂

  • Serve WP Rocket Cache Directly via Apache – Rocket-Nginx Style


    If you’re using WP Rocket and want to drastically improve performance, one of the best tricks is to bypass WordPress and PHP entirely when a static cache file exists. While Rocket-Nginx offers this for NGINX servers, Apache users can achieve the same result using smart .htaccess rules.

    Here’s how to do it.


    The Problem

    By default, WP Rocket cache is generated into:

    wp-content/cache/wp-rocket/your-domain.com/
    

    But Apache doesn’t know to check there first. So every request, even if cached, still hits WordPress and PHP — wasting resources.


    The Solution: .htaccess Rewrite Rules

    You can use .htaccess to check if a static cached file exists and serve it immediately, skipping WordPress completely.

    Add this to your root .htaccess file:

    <IfModule mod_mime.c>
      AddEncoding gzip .html_gzip
      AddType text/html .html_gzip
    </IfModule>
    
    <IfModule mod_headers.c>
      <FilesMatch "\.html_gzip$">
        Header set Content-Encoding gzip
        Header set Content-Type "text/html; charset=UTF-8"
      </FilesMatch>
    </IfModule>
    
    <IfModule mod_rewrite.c>
      RewriteEngine On
    
      # Define cache folder
      RewriteCond %{HTTP_HOST} ^(www\.)?(.+)$ [NC]
      RewriteRule .* - [E=HOST:%2]
    
      # Serve gzipped cache if supported and exists
      RewriteCond %{REQUEST_METHOD} GET
      RewriteCond %{HTTP_COOKIE} !(comment_author|wordpress_[a-f0-9]+|wp-postpass|wordpress_logged_in) [NC]
      RewriteCond %{HTTPS} on
      RewriteCond %{HTTP:Accept-Encoding} gzip
      RewriteCond %{DOCUMENT_ROOT}/wp-content/cache/wp-rocket/%{ENV:HOST}/%{REQUEST_URI}/index-https.html_gzip -f
      RewriteRule .* /wp-content/cache/wp-rocket/%{ENV:HOST}/%{REQUEST_URI}/index-https.html_gzip [L]
    
      # Serve HTTPS non-gzipped cache if exists
      RewriteCond %{HTTPS} on
      RewriteCond %{DOCUMENT_ROOT}/wp-content/cache/wp-rocket/%{ENV:HOST}/%{REQUEST_URI}/index-https.html -f
      RewriteRule .* /wp-content/cache/wp-rocket/%{ENV:HOST}/%{REQUEST_URI}/index-https.html [L]
    
      # Serve HTTP gzipped cache if supported and exists
      RewriteCond %{REQUEST_METHOD} GET
      RewriteCond %{HTTP_COOKIE} !(comment_author|wordpress_[a-f0-9]+|wp-postpass|wordpress_logged_in) [NC]
      RewriteCond %{HTTPS} off
      RewriteCond %{HTTP:Accept-Encoding} gzip
      RewriteCond %{DOCUMENT_ROOT}/wp-content/cache/wp-rocket/%{ENV:HOST}/%{REQUEST_URI}/index.html_gzip -f
      RewriteRule .* /wp-content/cache/wp-rocket/%{ENV:HOST}/%{REQUEST_URI}/index.html_gzip [L]
    
      # Serve HTTP non-gzipped cache if exists
      RewriteCond %{HTTPS} off
      RewriteCond %{DOCUMENT_ROOT}/wp-content/cache/wp-rocket/%{ENV:HOST}/%{REQUEST_URI}/index.html -f
      RewriteRule .* /wp-content/cache/wp-rocket/%{ENV:HOST}/%{REQUEST_URI}/index.html [L]
    </IfModule>
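    The lookup order these rules implement (gzip variant first, plain HTML second) can be illustrated in plain shell; the docroot, host and URI below are hypothetical:

```shell
# Illustration of the cache lookup order; all paths are hypothetical
docroot=$(mktemp -d)
host="example.com"
uri="/about/"
cachedir="$docroot/wp-content/cache/wp-rocket/$host$uri"

mkdir -p "$cachedir"
touch "$cachedir/index-https.html"   # only the non-gzipped variant exists

served=""
for candidate in index-https.html_gzip index-https.html; do
  if [ -f "$cachedir$candidate" ]; then
    served="$candidate"
    break
  fi
done
echo "serving: $served"

rm -rf "$docroot"
```

    With the rules in place, Apache performs the same ordered existence checks via RewriteCond -f before the request ever reaches PHP.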

    Result

    With these rules:

    • Visitors get blazing fast static HTML delivery
    • WordPress and PHP are completely bypassed if cache exists
    • Resource usage drops and TTFB improves dramatically

    This is the closest you can get to Rocket-Nginx behavior on Apache, with zero plugins or server-level modules required.

  • How a PostgreSQL Vulnerability Led to a Crypto Mining Malware Infection: An Incident Analysis


    Date of Incident: May 5, 2025
    Affected System: AlmaLinux 9.5 with cPanel & PostgreSQL

    Incident Summary

    A Linux server running cPanel with PostgreSQL was compromised through a misconfigured PostgreSQL service, which allowed an attacker to upload and execute a malicious binary called cpu_hu. This ELF executable is part of a known crypto mining malware campaign, which abuses PostgreSQL’s permissions to spawn unauthorized processes.


    Indicators of Compromise (IOCs)

    • Suspicious binary: /var/lib/pgsql/data/base/13494/cpu_hu
    • Crontab entries for postgres user triggering binary execution
    • Audit logs showing executions of /usr/bin/s-nail, /usr/sbin/sendmail, and /usr/sbin/exim with UID 26 (PostgreSQL user)
    • Kernel logs in /var/log/messages showing: "Killing process <PID> (cpu_hu) with signal SIGKILL"
    • PostgreSQL failing to restart due to improper permissions after remediation

    Attack Vector & Execution

    The attacker exploited a misconfigured PostgreSQL installation with one or more of:

    • trust authentication enabled
    • publicly accessible port 5432
    • the ability to execute arbitrary shell commands via SQL extensions like COPY TO PROGRAM

    Once inside, the attacker used the postgres user to:

    1. Upload the ELF binary
    2. Schedule its execution via cron

    Containment & Mitigation Steps

    Step 1: Kill Malware Processes

    pkill -f cpu_hu

    Step 2: Remove Binary

    find / -type f -name '*cpu_hu*' -delete

    Step 3: Clean Up Crontab

    crontab -u postgres -r

    Step 4: Lock Down PostgreSQL

    Edit postgresql.conf and add:

    session_preload_libraries = ''

    Restart service.

    -- Inside psql:
    ALTER USER postgres PASSWORD 'new-strong-password';
    REVOKE EXECUTE ON FUNCTION pg_ls_dir(text) FROM PUBLIC;
    REVOKE EXECUTE ON FUNCTION pg_read_file(text) FROM PUBLIC;
    REVOKE EXECUTE ON FUNCTION pg_stat_file(text) FROM PUBLIC;

    Step 5: Set Correct Permissions

    chown -R postgres:postgres /var/lib/pgsql/data
    chmod 700 /var/lib/pgsql/data
    systemctl restart postgresql

    Step 6: Block Public Access

    iptables -A INPUT -p tcp --dport 5432 -s <trusted_ip> -j ACCEPT
    iptables -A INPUT -p tcp --dport 5432 -j DROP

    Lessons Learned

    • Always restrict PostgreSQL to localhost or VPN access
    • Disable dangerous features like COPY TO PROGRAM unless absolutely required
    • Use auditd rules to track mail/sendmail invocations by non-root users:
    auditctl -a always,exit -F arch=b64 -S execve -F uid=26 -F path=/usr/bin/s-nail -k mail_postgres_exec
    • Regularly inspect crontabs and non-root user binaries in /var/lib/ and /tmp
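    The first lesson can be enforced directly in PostgreSQL’s own configuration; a minimal sketch (file locations vary by distro, e.g. under /var/lib/pgsql/data/ on RHEL-family systems):

```conf
# postgresql.conf -- listen on loopback only
listen_addresses = 'localhost'

# pg_hba.conf -- require password auth locally; never use "trust"
local   all   all                 scram-sha-256
host    all   all   127.0.0.1/32  scram-sha-256
host    all   all   ::1/128       scram-sha-256
```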


    Status: Resolved

  • Fix cPanel ownership and permissions [Script]


    When a cPanel server experiences file permission issues (after a migration, manual file operations, or a misbehaving script), websites may become inaccessible, emails may fail, or security might be at risk. This script automates fixing file ownership and permissions for one or more cPanel users, bringing everything back to a secure and functional state.

    Use Case

    You may need to run this script when:

    • Website files show 403 Forbidden errors
    • Email delivery fails due to incorrect ~/etc permissions
    • Files were copied or restored without --preserve flags
    • CageFS directories have incorrect modes

    How to run the script

    for user in /var/cpanel/users/*; do ./fixperms "$(basename "$user")"; done

    The Script (save as ./fixperms and chmod +x fixperms)

    #!/bin/bash
    # Script to fix permissions and ownerships for one or more cPanel users
    
    if [ "$#" -lt "1" ]; then
      echo "Must specify at least one user"
      exit 1
    fi
    
    USERS=$@
    
    for user in $USERS; do
      HOMEDIR=$(getent passwd "$user" | cut -d: -f6)
    
      if [ ! -f /var/cpanel/users/"$user" ]; then
        echo "User file missing for $user, skipping"
        continue
      elif [ -z "$HOMEDIR" ]; then
        echo "Could not determine home directory for $user, skipping"
        continue
      fi
    
      echo "Fixing ownership and permissions for $user"
    
      # Ownership
      chown -R "$user:$user" "$HOMEDIR" >/dev/null 2>&1
      chmod 711 "$HOMEDIR" >/dev/null 2>&1
      chown "$user:nobody" "$HOMEDIR/public_html" "$HOMEDIR/.htpasswds" 2>/dev/null
      chown "$user:mail" "$HOMEDIR/etc" "$HOMEDIR/etc/"*/shadow "$HOMEDIR/etc/"*/passwd 2>/dev/null
    
      # File permissions (parallel)
      find "$HOMEDIR" -type f -print0 2>/dev/null | xargs -0 -P4 chmod 644 2>/dev/null
      find "$HOMEDIR" -type d -print0 2>/dev/null | xargs -0 -P4 chmod 755 2>/dev/null
      # cgi-bin scripts must be executable, so override 644 for files inside cgi-bin
      find "$HOMEDIR" -path '*/cgi-bin/*' -type f -print0 2>/dev/null | xargs -0 -P4 chmod 755 2>/dev/null
    
      chmod 750 "$HOMEDIR/public_html" 2>/dev/null
    
      # CageFS fixes
      if [ -d "$HOMEDIR/.cagefs" ]; then
        chmod 775 "$HOMEDIR/.cagefs" 2>/dev/null
        chmod 700 "$HOMEDIR/.cagefs/tmp" "$HOMEDIR/.cagefs/var" 2>/dev/null
        chmod 777 "$HOMEDIR/.cagefs/cache" "$HOMEDIR/.cagefs/run" 2>/dev/null
      fi
    
    done

    This is an improved version of the script from: https://www.casbay.com/guide/kb/script-to-fix-cpanel-account-permissions-2

  • Recovering MySQL Databases on a Crashed cPanel Server Without Backups

    Recovering MySQL Databases on a Crashed cPanel Server Without Backups

    When a cPanel server experiences catastrophic failure without any valid backups, restoring websites and databases manually becomes the only option. In my case, the server had completely failed and could only be accessed via a rescue environment. No backups were available in /backup, and the system was non-bootable due to critical library corruption.

    To recover, I mounted the failed system, manually transferred essential directories such as /var/lib/mysql and /home to a freshly installed cPanel server using rsync, and fixed ownership and permissions. This restored websites and database files physically, but cPanel/WHM did not recognize the MySQL databases or users.

    Problem: cPanel Doesn’t Recognize Existing MySQL Databases

    Although the database folders were correctly placed in /var/lib/mysql/ and all MySQL users were present in the mysql.user table, cPanel GUI showed no databases or users associated with any account.

    This is expected behavior — cPanel stores mappings between accounts, databases, and MySQL users in its own internal metadata files, which were not recoverable.

    Solution: Rebuild cPanel MySQL Mapping Using dbmaptool

    To restore MySQL database and user associations for each cPanel account without recreating them manually, I used the official cPanel utility:

    /usr/local/cpanel/bin/dbmaptool

    I created a script that:

    • Loops through all cPanel users (found in /var/cpanel/users)
    • For each user, finds all MySQL databases starting with the user’s prefix (e.g. user_db1)
    • Finds all MySQL users belonging to that prefix (e.g. user_dbuser1)
    • Automatically maps them using dbmaptool
    #!/bin/bash
    
    for user in $(ls /var/cpanel/users); do
      dbs=$(mysql -N -e "SHOW DATABASES LIKE '${user}\_%';" | tr '\n' ',' | sed 's/,$//')
      dbusers=$(mysql -N -e "SELECT User FROM mysql.user WHERE User LIKE '${user}\_%';" | tr '\n' ',' | sed 's/,$//')
    
      if [[ -n "$dbs" || -n "$dbusers" ]]; then
        echo "Mapping for user: $user"
        /usr/local/cpanel/bin/dbmaptool "$user" --type 'mysql' --dbs "$dbs" --dbusers "$dbusers"
      fi
    done

    Final Cache Refresh

    After running the script, I executed:

    /scripts/update_db_cache
    /scripts/updateuserdatacache

    This forced cPanel to reload and re-index the updated metadata, and all previously invisible databases and MySQL users reappeared in the cPanel UI for each respective account.

    Even in total system failure scenarios with no backups, if the /home and /var/lib/mysql directories are intact and MySQL users are present, it’s entirely possible to recover a cPanel environment manually. The key is to re-establish metadata associations using dbmaptool, which tells cPanel which databases and users belong to which accounts.
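    Once the caches are rebuilt, the restored mappings can be spot-checked from the command line. The `uapi` tool and its Mysql module are standard cPanel utilities; the username below is a placeholder:

    ```shell
    # What cPanel now believes belongs to the account
    uapi --user=someuser Mysql list_databases

    # Cross-check against what actually exists in MySQL
    mysql -N -e "SHOW DATABASES LIKE 'someuser\_%';"
    ```

    If the two lists disagree, re-run dbmaptool for that account and refresh the caches again.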

  • The Truth About “Popcorn Lung” and Vaping: Why There Is No Reason to Panic

    The Truth About “Popcorn Lung” and Vaping: Why There Is No Reason to Panic

    Lately, posts linking vaping to so-called “popcorn lung” have been appearing more and more often on social media. Unfortunately, most of these posts are based on outdated and misinterpreted information that no longer carries any weight in the modern context.

    What exactly is “popcorn lung”?

    “Popcorn lung” is the colloquial name for bronchiolitis obliterans, a rare and serious lung disease that was once linked to the chemical diacetyl. Diacetyl was used as a flavoring in the food industry (e.g. for the buttery taste of popcorn), but inhaling high concentrations of the substance has been associated with airway damage.

    How is diacetyl connected to e-cigarettes?

    Back in 2014, one study found diacetyl in roughly 70% of the e-liquids it tested. That raised concern, and rightly so. The industry, however, reacted quickly: since 2018, virtually no reputable company has used diacetyl in its liquids.

    A 2021 Canadian study tested 825 different e-cigarettes. The result? Only two products contained diacetyl, and neither was manufactured after 2018.

    Cigarettes are far more dangerous

    Interestingly, conventional cigarettes contain considerably more diacetyl than e-cigarettes ever did. Yet hardly anyone mentions that when it is time to demonize vaping.

    In other words: if you smoked cigarettes and then switched to vaping, you have already taken a huge step for your health.

    What happened to the people who ended up in hospital?

    A large number of the hospitalized cases, especially in the US, had nothing to do with nicotine liquids, but with illegal black-market THC liquids. Those substances often contain vitamin E acetate, which is extremely dangerous to the lungs when inhaled.
    So the problem is not vaping itself, but unverified, illegal products.

    Conclusion: Sensationalism is a bigger risk than vaping itself

    If you use verified liquids from serious manufacturers and do not buy “street juice in a bottle”, you have no reason to panic. Vaping is not risk-free, but it is incomparably less harmful than smoking. Most importantly, the “popcorn lung” myth has been scientifically debunked and no longer has a place in serious discussion.

    Read more here:
    https://reason.org/backgrounder/debunking-the-myth-that-vaping-causes-popcorn-lung


  • Enable DKIM and Email service on all domains plesk

    Enable DKIM and Email service on all domains plesk

    If you’re managing a Plesk server with multiple domains and want to enable DKIM signing for all of them quickly, this guide is for you.

    Why DKIM Matters

    DKIM (DomainKeys Identified Mail) helps prevent email spoofing by attaching a cryptographic signature to outgoing messages. Without DKIM, your emails are more likely to land in spam or be rejected entirely — especially by strict providers like Gmail, Outlook, or Yahoo.

    The CLI Way: Enable DKIM for All Domains in Seconds

    for domain in $(plesk bin domain --list); do
        plesk bin site -u "$domain" -mail_service true;
        plesk bin domain_pref --update "$domain" -sign_outgoing_mail true;
    done

    What this script does:

    • Lists all domains on your Plesk server.
    • Ensures that the mail service is enabled for each domain.
    • Enables DKIM signing for outgoing emails on each domain.

    If you don’t use Plesk for DNS (e.g., you’re on Cloudflare, Route53, or external nameservers), you’ll need to manually copy each domain’s DKIM public key to its DNS zone.
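    To collect those keys for manual copying, one hedged approach is to query the server's own resolver for each domain's selector record. Plesk publishes DKIM keys under the `default._domainkey` selector; querying 127.0.0.1 assumes the local DNS service is running and authoritative for the zones:

    ```shell
    # Print each domain's DKIM TXT record for pasting into the external DNS zone
    for domain in $(plesk bin domain --list); do
        echo "== $domain =="
        dig +short TXT "default._domainkey.$domain" @127.0.0.1
    done
    ```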

    Enabling DNS for all domains:

    for domain in $(plesk bin domain --list); do
        echo "Enabling DNS for $domain"
        plesk bin site -u "$domain" -dns true
    done

    Before running this command, temporarily disable “Use the serial number format recommended by IETF and RIPE” in Tools & Settings > DNS Settings > Zone Settings Template.