Author: Luka Paunović

  • filemng: Fixing Plesk Process Hang Caused by Missing or Broken Binary

    filemng: Fixing Plesk Process Hang Caused by Missing or Broken Binary

    If you have ever encountered a situation where Plesk-related operations such as backups, extension uninstallations, or file operations hang indefinitely, the root cause could be a broken or missing filemng binary. In this article, we’ll explain why this happens, how to diagnose it, and how to fix it by manually extracting the correct binary from a Plesk RPM package.


    What Is filemng in Plesk?

    The filemng tool is a critical internal Plesk binary responsible for secure file operations under the correct user permissions. Virtually every file or directory manipulation within Plesk (for example, during backups, migrations, or UI file management) uses filemng under the hood.

    If filemng is missing or corrupted, you will experience:

    • Hanging backup tasks
    • Failed extension uninstalls
    • Stuck migrations
    • Inability to manage hosting files properly

    Symptoms of a filemng Issue

    • Backups hang forever.
    • Commands like /usr/local/psa/admin/bin/filemng root file_exists /path/to/file never complete.
    • Manual execution of filemng returns errors like:
    filemng: execve failed: No such file or directory
    System error 2
    • Errors during Plesk repair procedures:
    Expected and actual types of file '/usr/local/psa/admin/sbin/filemng' do not match: file != symlink.

    Why Does This Happen?

    Normally, /usr/local/psa/admin/sbin/filemng should be a symlink to /opt/psa/admin/sbin/filemng. If the target binary is missing or the symlink is broken, Plesk tasks will break.

    This can happen due to:

    • Incomplete Plesk updates
    • Accidental file deletion
    • Filesystem corruption
    • Faulty RPM packaging in some rare Plesk builds

    How to Fix the filemng Problem

    Here’s the complete recovery guide:

    Step 1: Confirm the Problem

    Check if filemng exists:

    ls -lh /usr/local/psa/admin/sbin/filemng

    If the binary or symlink is missing or broken, proceed.
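
    To tell the failure modes apart (healthy file, broken symlink, or missing path), a small generic check helps. This is a sketch, not a Plesk tool; the last line tests the filemng location from above:

```shell
# Classify a path as a healthy file, a broken symlink, or missing entirely
check_path() {
    p="$1"
    if [ -L "$p" ] && [ ! -e "$p" ]; then
        echo "broken symlink: $p -> $(readlink "$p")"
    elif [ ! -e "$p" ]; then
        echo "missing: $p"
    else
        echo "ok: $p"
    fi
}

check_path /usr/local/psa/admin/sbin/filemng
```

    Either a "missing" or a "broken symlink" result means you should proceed with the recovery steps below.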


    Step 2: Download the Correct RPM

    Download a working version of plesk-service-node-utilities, which includes filemng.

    Example for Plesk 18.0.68 (CentOS 7):

    wget https://autoinstall.plesk.com/PSA_18.0.68/dist-rpm-CentOS-7-x86_64/base/plesk-service-node-utilities-18.0-2.centos.7+p18.0.68.2+t250319.0858.x86_64.rpm

    Step 3: Extract the filemng Binary

    Create a folder to extract contents:

    mkdir /root/unpacked_filemng
    cd /root/unpacked_filemng

    Extract the RPM content:

    rpm2cpio plesk-service-node-utilities-18.0-2.centos.7+p18.0.68.2+t250319.0858.x86_64.rpm | cpio -idmv

    Locate the extracted filemng:

    ls -lh ./usr/local/psa/admin/sbin/filemng

    Step 4: Copy the Binary to the Correct Location

    If the binary exists in the unpacked folder, copy it:

    rm -f /usr/local/psa/admin/sbin/filemng
    cp ./usr/local/psa/admin/sbin/filemng /usr/local/psa/admin/sbin/filemng 
    chmod 755 /usr/local/psa/admin/sbin/filemng 
    chown root:root /usr/local/psa/admin/sbin/filemng

    Ensure correct permissions and ownership.


    Step 5: Restart Plesk Services

    Finally, restart Plesk services to reinitialize the environment:

    systemctl restart psa
    systemctl restart sw-engine
    systemctl restart sw-cp-server

    Final Test

    Now you can test filemng manually:

    echo "test" > /usr/local/psa/tmp/testfile.txt
    
    /usr/local/psa/admin/bin/filemng root file_exists /usr/local/psa/tmp/testfile.txt --allow-root

    If it returns 0, it means everything works!


    Conclusion

    A missing or broken filemng binary can cripple your Plesk server’s basic functionalities.
    By extracting it from the correct RPM package and restoring it manually, you can fully recover the system without requiring a full Plesk reinstall.

    If you found this guide helpful, consider bookmarking it — these low-level fixes can save you a lot of downtime in future emergencies!

  • Fix Loopback, Action-Scheduler past-due actions and REST API 403 Errors on WordPress Behind Cloudflare

    Fix Loopback, Action-Scheduler past-due actions and REST API 403 Errors on WordPress Behind Cloudflare

    If your WordPress site is behind Cloudflare, you may run into issues where WP-Cron, loopback requests, or the REST API stop working properly. This typically happens because Cloudflare blocks server-to-server requests from your own domain due to bot protection, JavaScript challenges, or rate limiting.


    The Problem

    WordPress internally calls its own URLs (like wp-cron.php or REST API endpoints) using HTTP requests. When your domain is protected by Cloudflare, those internal requests may get blocked with a 403 Forbidden response — even though everything works fine for real visitors.

    This breaks important features like:

    • Cron jobs (WP-Cron and other background tasks)
    • Plugin updates and checks
    • Site Health REST API tests
    • Some block editor functionality

    What You See in Site Health

    Under Tools > Site Health, you may encounter this warning:

    The REST API encountered an unexpected result

    The REST API is one way that WordPress and other applications communicate with the server. For example, the block editor screen relies on the REST API to display and save your posts and pages.

    REST API Endpoint: https://your-site.com/wp-json/wp/v2/types/post?context=edit
    REST API Response: (403) Forbidden

    Action Scheduler may show this notice:

    Action Scheduler: 31 past-due actions found; something may be wrong. Read documentation

    You may also see that WP-Cron or plugin update checks silently fail, especially if you’re using Cloudflare Bot Fight Mode or JS Challenge settings.


    The Solution: Bypass DNS and Use Direct IP with Host Header

    To fix this, we can intercept all outgoing HTTP requests from WordPress that go to your-site.com and force them to use the direct IP address, while still sending the proper Host header (your-site.com).

    🛠 Add This Code to Your functions.php

    add_action( 'http_api_curl', function( $handle, $r, $url ) {
        if ( strpos( $url, 'your-site.com' ) !== false ) {
            // Keep the original Host header so WordPress routes the request
            // to the right site. Note: CURLOPT_HTTPHEADER replaces any
            // headers set earlier on this handle.
            curl_setopt( $handle, CURLOPT_HTTPHEADER, array(
                'Host: your-site.com'
            ) );

            // Override the URL to use the direct origin IP, bypassing Cloudflare
            $ip      = '167.253.159.232';
            $new_url = str_replace( 'your-site.com', $ip, $url );
            curl_setopt( $handle, CURLOPT_URL, $new_url );

            // Optional: skip SSL verification, since the certificate is
            // issued for the domain, not the bare IP (0 disables host checks)
            curl_setopt( $handle, CURLOPT_SSL_VERIFYHOST, 0 );
            curl_setopt( $handle, CURLOPT_SSL_VERIFYPEER, false );
        }
    }, 10, 3 );
    

    🧠 Why It Works

    • WordPress core uses wp_remote_get() and similar functions to communicate with itself.
    • Normally, those requests resolve the domain via DNS — which goes through Cloudflare.
    • With this hook, we force the request to use the real server IP, bypassing Cloudflare.
    • The Host header ensures that WordPress still treats the request as coming to your-site.com.
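
    You can reproduce what the hook does from the command line. A minimal shell sketch, using a placeholder origin IP (203.0.113.10) in place of your real server address:

```shell
# Placeholder values -- substitute your real domain and origin server IP
DOMAIN="your-site.com"
ORIGIN_IP="203.0.113.10"

# Rewrite a URL to hit the origin IP directly (what the PHP hook does)
rewrite_url() {
    printf '%s\n' "$1" | sed "s/$DOMAIN/$ORIGIN_IP/"
}

rewrite_url "https://$DOMAIN/wp-json/wp/v2/types/post?context=edit"

# Manual equivalent with curl that keeps TLS verification intact:
#   curl -s --resolve "$DOMAIN:443:$ORIGIN_IP" "https://$DOMAIN/wp-json/"
```

    The commented curl variant with --resolve is handy for testing, because it pins DNS to the origin IP without disabling certificate checks.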

    Summary

    If you’re getting 403 Forbidden errors when WordPress tries to call itself (for cron jobs, REST API, or updates), and your site is behind Cloudflare, this trick forces internal requests to talk directly to the origin server.

    No more Cloudflare blockages. Just clean internal communication, like it should be.

  • How to Add an Additional IP Address to a Debian Server at Netcup

    How to Add an Additional IP Address to a Debian Server at Netcup

    If you have added an additional IPv4 address (usually a /32 routed IP) to your server at Netcup, you have to configure it manually on the server. In this article I’ll show you how to set it up correctly on Debian 12.

    Step 1: Route the IP in the Netcup Panel

    1. Log in to the Netcup customer panel.
    2. Go to your vServer or root server.
    3. Click on "IP addresses".
    4. Select the additional IP (e.g. 81.16.19.220).
    5. Click on "Route IP to server".

    Without this step, the IP will not be routed to your server!


    Step 2: Add the IP Address on Debian

    If you are using DHCP (the default on many Netcup images), you can add the additional IP permanently with a systemd service.

    Create the script:

    sudo nano /etc/network/additional-ip.sh

    Paste the following content:

    #!/bin/bash
    ip addr add 81.16.19.220/32 dev ens3
    ip route add 81.16.19.220/32 dev ens3

    Save it and make it executable:

    sudo chmod +x /etc/network/additional-ip.sh

    Create the systemd service:

    sudo nano /etc/systemd/system/additional-ip.service

    Content:

    [Unit]
    Description=Add additional IP address
    After=network.target

    [Service]
    Type=oneshot
    ExecStart=/etc/network/additional-ip.sh
    RemainAfterExit=yes

    [Install]
    WantedBy=multi-user.target

    Then enable it:

    sudo systemctl daemon-reload
    sudo systemctl enable --now additional-ip.service

    Test It

    After a reboot, or immediately after enabling the service, you can check with ip a whether the IP is active:

    ip addr show ens3

    And from outside with:

    ping 81.16.19.220

    📌 Note: This IP can then be used deliberately in NGINX or mail servers, for example, by binding the service to it.
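
    Before binding NGINX or a mail server to the new address, you can verify from a script that the address is actually configured. A small sketch (the address is the example IP from this article):

```shell
# Check that an IP is configured on some local interface before using it
# as a bind address (requires iproute2)
has_ip() {
    ip -o addr show | grep -qF "inet $1/"
}

if has_ip 81.16.19.220; then
    echo "IP is configured"
else
    echo "IP is NOT configured"
fi
```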

  • Solving Cross-Subnet VM Communication in Hetzner

    Solving Cross-Subnet VM Communication in Hetzner

    Note: All IP addresses in this article have been modified to protect client privacy

    When managing virtual machines across different subnets on a Virtualizor node, you might encounter networking challenges that aren’t immediately obvious. I recently tackled such an issue on a Hetzner server where VMs with IPs from different subnets couldn’t communicate with each other.

    The Problem

    Here was my setup:

    • A Virtualizor node with a main IP address
    • Three different subnets allocated:
      • 192.168.1.0/29 (connected to natbr8)
      • 10.10.20.0/29 (connected to natbr7)
      • 172.16.5.0/29 (connected to natbr5)
    • VMs with IPs from different subnets (specifically 10.10.20.6 and 192.168.1.4) couldn’t talk to each other

    While some might suggest that Hetzner blocks cross-subnet communication by default, the reality is more nuanced. Hetzner doesn’t inherently block such communication – the issue is that proper routing configuration is needed to enable it.

    The Solution

    After troubleshooting, I found that solving this problem required configuring several networking components:

    1. Host-level NAT and Forwarding Rules

    First, I needed proper NAT masquerading for traffic between subnets:

    # NAT rules for cross-subnet communication
    iptables -t nat -A POSTROUTING -s 192.168.1.0/29 -d 10.10.20.0/29 -j MASQUERADE
    iptables -t nat -A POSTROUTING -s 10.10.20.0/29 -d 192.168.1.0/29 -j MASQUERADE

    Then I needed to ensure packet forwarding between network bridges:

    # Allow forwarding between bridges
    iptables -A FORWARD -i natbr7 -o natbr8 -j ACCEPT
    iptables -A FORWARD -i natbr8 -o natbr7 -j ACCEPT
    iptables -A FORWARD -o natbr7 -m state --state RELATED,ESTABLISHED -j ACCEPT
    iptables -A FORWARD -o natbr8 -m state --state RELATED,ESTABLISHED -j ACCEPT
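
    One prerequisite the forwarding rules above rely on: IPv4 forwarding must be enabled in the kernel. Virtualizor hosts usually have it on already, but it is worth verifying:

```shell
# Check whether IPv4 forwarding is enabled (1 = on, 0 = off)
cat /proc/sys/net/ipv4/ip_forward

# If it prints 0, enable it persistently (requires root):
#   echo 'net.ipv4.ip_forward = 1' > /etc/sysctl.d/99-forwarding.conf
#   sysctl -p /etc/sysctl.d/99-forwarding.conf
```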

    2. IP Routing Rules

    Special routing rules were needed to correctly handle traffic between the specific VMs:

    # Add routing rules for specific VMs
    ip rule add from 10.10.20.6 to 192.168.1.4
    ip rule add from 192.168.1.4 to 10.10.20.6

    3. VM-level Configuration

    Inside the VM with IP 192.168.1.4, I added a specific route:

    ip route add 10.10.20.6 via 192.168.1.1 dev eth0

    The VM with IP 10.10.20.6 already had appropriate routing via its default gateway:

    192.168.1.0/29 via 10.10.20.1 dev eth0

    How It Works

    With this configuration, here’s how traffic flows:

    1. When VM 192.168.1.4 sends a packet to VM 10.10.20.6:
      • The packet gets routed through gateway 192.168.1.1
      • The host applies NAT masquerading
      • The packet is forwarded from natbr8 to natbr7
      • The packet arrives at VM 10.10.20.6
    2. When VM 10.10.20.6 sends a packet to VM 192.168.1.4:
      • The packet gets routed through gateway 10.10.20.1
      • The host applies NAT masquerading
      • The packet is forwarded from natbr7 to natbr8
      • The packet arrives at VM 192.168.1.4

    Lessons Learned

    This experience taught me several important things about cloud networking:

    1. Provider policies aren’t always the culprit – While some cloud providers do restrict cross-subnet communication, often the issue is just proper configuration.
    2. Layer by layer troubleshooting is essential – Working through each networking layer (VM routes, host forwarding, NAT, etc.) methodically led to the solution.
    3. VM-level routing matters – Even with correct host configuration, each VM needs to know how to route packets to other subnets.
    4. Documentation is crucial – After fixing the issue, documenting the solution thoroughly saved time when I needed to replicate or modify the setup later.

    For anyone facing similar issues in Hetzner or other cloud environments, I recommend examining your routing tables, NAT rules, and forwarding configurations at both the host and VM levels. The problem is almost always solvable with the right networking configuration.

    Do you face similar networking challenges in your infrastructure?

  • How to install myVesta

    How to install myVesta

    How to install

    Download the installation script:

    curl -O http://c.myvestacp.com/vst-install-debian.sh

    Then run it:

    bash vst-install-debian.sh


  • How to Safely Update n8n in a Docker Container (2025 Guide)

    How to Safely Update n8n in a Docker Container (2025 Guide)

    Locate docker containers:

    docker ps

    You should see the Caddy reverse proxy container and the n8n container.

    CONTAINER ID   IMAGE          COMMAND                  CREATED          STATUS          PORTS                                                                                             NAMES
    8d3a13c59a08   n8nio/n8n      "tini -- /docker-ent…"   13 minutes ago   Up 6 minutes    0.0.0.0:5678->5678/tcp, [::]:5678->5678/tcp                                                       n8n
    8917b5af4d16   caddy:latest   "caddy run --config …"   4 weeks ago      Up 37 minutes   0.0.0.0:80->80/tcp, [::]:80->80/tcp, 0.0.0.0:443->443/tcp, [::]:443->443/tcp, 443/udp, 2019/tcp   n8n-docker-caddy-caddy-1

    Locate volume data:

    root@docker-n8n-droplet:~/.n8n# docker volume ls
    DRIVER    VOLUME NAME
    local     caddy_data
    local     n8n_data

    In my case the data lives in the n8n_data volume.


    Back it up (replace n8n_data with your volume name):

    docker run --rm -v n8n_data:/data -v "$HOME/n8n-volume-backup:/backup" alpine sh -c "cd /data && tar czf /backup/n8n_data_backup.tar.gz ."
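
    Before removing the old container, it is worth confirming that the archive you just wrote is actually readable. A small sketch (the path matches the backup command above):

```shell
# Verify that a .tar.gz backup can be listed end-to-end
verify_backup() {
    if tar tzf "$1" >/dev/null 2>&1; then
        echo "backup OK: $1"
    else
        echo "backup BROKEN: $1"
    fi
}

verify_backup "$HOME/n8n-volume-backup/n8n_data_backup.tar.gz"
```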
    

    Stop N8N Container

    docker stop <container_id>

    Remove the old N8N container (doesn’t remove data):

    docker rm <container_id>

    WARNING: Do not remove the Caddy container. Caddy is the web server that acts as a reverse proxy in front of the n8n container.

    Create docker-compose.yml inside the ~/.n8n folder:

    mkdir -p ~/.n8n

    Place this inside docker-compose.yml:

    version: "3.7"
    
    services:
      n8n:
        image: n8nio/n8n
        container_name: n8n
        restart: unless-stopped
        ports:
          - "5678:5678"
        environment:
          - N8N_HOST=n8n.domain.com
          - N8N_PORT=5678
          - N8N_PROTOCOL=https
          - WEBHOOK_URL=https://n8n.domain.com/
          - GENERIC_TIMEZONE=Europe/Belgrade
        volumes:
          - n8n_data:/home/node/.n8n
    
    volumes:
      n8n_data:
        external: true

    Replace n8n_data with the name of your volume, and n8n.domain.com with your actual domain name.

    Download latest N8N

    cd ~/.n8n
    docker compose pull

    Run N8N

    cd ~/.n8n
    docker compose up -d

    Fix Caddy IP

    The IP address of the n8n docker container will change after the update. If your Caddyfile references an IP instead of a hostname, it needs to be updated.

    From docker ps get container ID for Caddy. Run:

    docker exec -it <container_id> sh
    vi /etc/caddy/Caddyfile

    Use this:

    n8n.domain.com {
        reverse_proxy http://n8n:5678 {
          flush_interval -1
        }
    }
    

    Usually you will see an IP address there instead of n8n; replace it with n8n.

    Restart Caddy

    docker restart <container_id>

    Reconnect docker network

    docker network create n8n_net
    docker network connect n8n_net <n8n_container_id>
    docker network connect n8n_net <caddy_container_id>
    

    This should help you update n8n running inside Docker on any setup 🙂

    If you run into questions or issues, you can leave a comment or contact me. Thanks.