Category: DevOps
Enable DKIM and Email Service on All Domains in Plesk
If you’re managing a Plesk server with multiple domains and want to enable DKIM signing for all of them quickly, this guide is for you.
Why DKIM Matters
DKIM (DomainKeys Identified Mail) helps prevent email spoofing by attaching a cryptographic signature to outgoing messages. Without DKIM, your emails are more likely to land in spam or be rejected entirely — especially by strict providers like Gmail, Outlook, or Yahoo.
The CLI Way: Enable DKIM for All Domains in Seconds
for domain in $(plesk bin domain --list); do
  plesk bin site -u "$domain" -mail_service true
  plesk bin domain_pref --update "$domain" -sign_outgoing_mail true
done
What this script does:
- Lists all domains on your Plesk server.
- Ensures that the mail service is enabled for each domain.
- Enables DKIM signing for outgoing emails on each domain.
If you don’t use Plesk for DNS (e.g., you’re on Cloudflare, Route53, or external nameservers), you’ll need to manually copy each domain’s DKIM public key to its DNS zone.
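Once the records are in place, it is worth spot-checking that the public keys actually resolve. Plesk normally publishes the key under the default selector (verify the exact record name in your Plesk mail settings if yours differs), so a quick loop with dig, assuming dig is installed, should return a TXT record for every domain:
for domain in $(plesk bin domain --list); do
  echo "== $domain =="
  dig +short TXT "default._domainkey.$domain"
done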
Enabling DNS for all domains:
for domain in $(plesk bin domain --list); do
  echo "Enabling DNS for $domain"
  plesk bin site -u "$domain" -dns true
done
Before running this command, temporarily disable “Use the serial number format recommended by IETF and RIPE” under Tools & Settings > DNS Settings > Zone Settings Template.
How to Add an Additional IP Address to a Debian Server at Netcup
If you have added an additional IPv4 address (usually a /32 routed IP) to your server at Netcup, you need to configure it manually on the server. In this article I show you how to set it up correctly on Debian 12.
Step 1: Route the IP in the Netcup Panel
- Log in to the Netcup customer panel.
- Go to your vServer or root server.
- Click on “IP-Adressen”.
- Select the additional IP (e.g. 81.16.19.220).
- Click on “IP auf Server routen” (route the IP to the server).
Without this step, the IP will not be forwarded to your server!
Step 2: Add the IP Address on Debian
If you are using DHCP (the default for many Netcup images), you can add the additional IP permanently with a systemd service.
Create the script:
sudo nano /etc/network/additional-ip.sh
Add the following content:
#!/bin/bash
ip addr add 81.16.19.220/32 dev ens3
ip route add 81.16.19.220/32 dev ens3
Save the file and make it executable:
sudo chmod +x /etc/network/additional-ip.sh
Create the systemd service:
sudo nano /etc/systemd/system/additional-ip.service
Content:
[Unit]
Description=Add additional IP address
After=network.target
[Service]
Type=oneshot
ExecStart=/etc/network/additional-ip.sh
RemainAfterExit=yes
[Install]
WantedBy=multi-user.target
Then enable it:
sudo systemctl daemon-reload
sudo systemctl enable --now additional-ip.service
Testing
After a reboot, or immediately after activation, you can use ip a to check whether the IP is active:
ip addr show ens3
And from an external host:
ping 81.16.19.220
📌 Note: This IP can then be used deliberately in NGINX or mail servers, for example, by binding the service to it.
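One quick, optional sanity check before wiring the address into a service: bind an outgoing request to the new IP and see which address the remote side reports. ifconfig.me is only an example echo service here; any similar service works.
curl --interface 81.16.19.220 https://ifconfig.me
# expected output: 81.16.19.220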
Solving Cross-Subnet VM Communication in Hetzner
Note: All IP addresses in this article have been modified to protect client privacy
When managing virtual machines across different subnets on a node running Virtualizor, you might encounter networking challenges that aren’t immediately obvious. I recently tackled such an issue on a Hetzner server where VMs with IPs from different subnets couldn’t communicate with each other.
The Problem
Here was my setup:
- A Virtualizor node with a main IP address
- Three different subnets allocated:
- 192.168.1.0/29 (connected to natbr8)
- 10.10.20.0/29 (connected to natbr7)
- 172.16.5.0/29 (connected to natbr5)
- VMs with IPs from different subnets (specifically 10.10.20.6 and 192.168.1.4) couldn’t talk to each other
While some might suggest that Hetzner blocks cross-subnet communication by default, the reality is more nuanced. Hetzner doesn’t inherently block such communication – the issue is that proper routing configuration is needed to enable it.
The Solution
After troubleshooting, I found that solving this problem required configuring several networking components:
1. Host-level NAT and Forwarding Rules
First, I needed proper NAT masquerading for traffic between subnets:
# NAT rules for cross-subnet communication
iptables -t nat -A POSTROUTING -s 192.168.1.0/29 -d 10.10.20.0/29 -j MASQUERADE
iptables -t nat -A POSTROUTING -s 10.10.20.0/29 -d 192.168.1.0/29 -j MASQUERADE
Then I needed to ensure packet forwarding between network bridges:
# Allow forwarding between bridges
iptables -A FORWARD -i natbr7 -o natbr8 -j ACCEPT
iptables -A FORWARD -i natbr8 -o natbr7 -j ACCEPT
iptables -A FORWARD -o natbr7 -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -A FORWARD -o natbr8 -m state --state RELATED,ESTABLISHED -j ACCEPT
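One prerequisite worth calling out: these FORWARD rules only do anything if IP forwarding is enabled on the host. On a Virtualizor node it usually already is, but it is a quick check:
# Verify that the host forwards packets between interfaces
sysctl net.ipv4.ip_forward          # should print net.ipv4.ip_forward = 1
sysctl -w net.ipv4.ip_forward=1     # enable it if needed (persist via /etc/sysctl.conf)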
2. IP Routing Rules
Special routing rules were needed to correctly handle traffic between the specific VMs:
# Add routing rules for specific VMs
ip rule add from 10.10.20.6 to 192.168.1.4
ip rule add from 192.168.1.4 to 10.10.20.6
3. VM-level Configuration
Inside the VM with IP 192.168.1.4, I added a specific route:
ip route add 10.10.20.6 via 192.168.1.1 dev eth0
The VM with IP 10.10.20.6 already had appropriate routing via its default gateway:
192.168.1.0/29 via 10.10.20.1 dev eth0
How It Works
With this configuration, here’s how traffic flows:
- When VM 192.168.1.4 sends a packet to VM 10.10.20.6:
- The packet gets routed through gateway 192.168.1.1
- The host applies NAT masquerading
- The packet is forwarded from natbr8 to natbr7
- The packet arrives at VM 10.10.20.6
- When VM 10.10.20.6 sends a packet to VM 192.168.1.4:
- The packet gets routed through gateway 10.10.20.1
- The host applies NAT masquerading
- The packet is forwarded from natbr7 to natbr8
- The packet arrives at VM 192.168.1.4
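If you want to see this in action, one way to verify the path (assuming tcpdump is available on the host) is to watch one of the bridges while pinging across subnets:
# On the host: watch ICMP traffic crossing natbr7
tcpdump -ni natbr7 icmp
# Inside VM 192.168.1.4: ping the VM in the other subnet
ping -c 3 10.10.20.6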
Lessons Learned
This experience taught me several important things about cloud networking:
- Provider policies aren’t always the culprit – While some cloud providers do restrict cross-subnet communication, often the issue is just proper configuration.
- Layer by layer troubleshooting is essential – Working through each networking layer (VM routes, host forwarding, NAT, etc.) methodically led to the solution.
- VM-level routing matters – Even with correct host configuration, each VM needs to know how to route packets to other subnets.
- Documentation is crucial – After fixing the issue, documenting the solution thoroughly saved time when I needed to replicate or modify the setup later.
For anyone facing similar issues in Hetzner or other cloud environments, I recommend examining your routing tables, NAT rules, and forwarding configurations at both the host and VM levels. The problem is almost always solvable with the right networking configuration.
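As a starting checklist, these are the host-side commands I would look at first (nothing exotic, just standard iproute2 and iptables tooling):
sysctl net.ipv4.ip_forward               # is forwarding enabled at all?
ip route show                            # host routing table
ip rule show                             # policy routing rules
iptables -t nat -L POSTROUTING -n -v     # NAT rules and their hit counters
iptables -L FORWARD -n -v                # forwarding rules and their hit counters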
Do you face similar networking challenges in your infrastructure?
How to Safely Update n8n in a Docker Container (2025 Guide)
Locate docker containers:
docker ps
You should see the Caddy reverse proxy container and the n8n container.
CONTAINER ID   IMAGE          COMMAND                   CREATED          STATUS          PORTS                                                                                             NAMES
8d3a13c59a08   n8nio/n8n      "tini -- /docker-ent…"    13 minutes ago   Up 6 minutes    0.0.0.0:5678->5678/tcp, [::]:5678->5678/tcp                                                       n8n
8917b5af4d16   caddy:latest   "caddy run --config …"    4 weeks ago      Up 37 minutes   0.0.0.0:80->80/tcp, [::]:80->80/tcp, 0.0.0.0:443->443/tcp, [::]:443->443/tcp, 443/udp, 2019/tcp   n8n-docker-caddy-caddy-1
Locate volume data:
root@docker-n8n-droplet:~/.n8n# docker volume ls
DRIVER    VOLUME NAME
local     caddy_data
local     n8n_data
In my case the data is stored in the n8n_data volume.
Back it up (replace n8n_data with your volume name):
docker run --rm -v n8n_data:/data -v "$HOME/n8n-volume-backup:/backup" alpine sh -c "cd /data && tar czf /backup/n8n_data_backup.tar.gz ."
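If the update goes wrong, the same pattern restores the archive back into the volume; this is just a sketch using the same backup path and volume name as above, so adjust both to match yours, and make sure the n8n container is stopped first:
docker run --rm -v n8n_data:/data -v "$HOME/n8n-volume-backup:/backup" alpine sh -c "cd /data && tar xzf /backup/n8n_data_backup.tar.gz"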
Stop N8N Container
docker stop <container_id>
Remove the old N8N container (doesn’t remove data):
docker rm <container_id>
WARNING: Do not remove the Caddy container; Caddy is the web server that acts as a reverse proxy for the n8n container.
Create a docker-compose.yml file inside the ~/.n8n folder.
mkdir ~/.n8n
Place this inside docker-compose.yml:
version: "3.7" services: n8n: image: n8nio/n8n container_name: n8n restart: unless-stopped ports: - "5678:5678" environment: - N8N_HOST=n8n.domain.com - N8N_PORT=5678 - N8N_PROTOCOL=https - WEBHOOK_URL=https://n8n.domain.com/ - GENERIC_TIMEZONE=Europe/Belgrade volumes: - n8n_data:/home/node/.n8n volumes: n8n_data: external: true
Replace n8n_data with the name of your volume and n8n.domain.com with your actual domain name.
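Optionally, you can validate the file before pulling anything; docker compose config parses the YAML and prints the resolved configuration, so typos show up here instead of at startup:
cd ~/.n8n
docker compose config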
Download latest N8N
cd ~/.n8n
docker compose pull
Run N8N
cd ~/.n8n
docker compose up -d
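To confirm the new container came up cleanly (the container name n8n comes from the compose file above):
docker compose ps      # the n8n service should be listed as running
docker logs -f n8n     # follow the startup output; Ctrl+C stops following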
Fix Caddy IP
The IP of the n8n Docker container will change. If your Caddyfile references an IP address instead of a hostname, this needs to be changed.
From docker ps, get the container ID for Caddy. Run:
docker exec -it <container_id> sh
vi /etc/caddy/Caddyfile
Use this:
n8n.domain.com {
    reverse_proxy http://n8n:5678 {
        flush_interval -1
    }
}
Usually you will see an IP address instead of n8n; replace it with n8n.
Restart Caddy
docker restart <container_id>
Reconnect docker network
docker network create n8n_net
docker network connect n8n_net <n8n_container_id>
docker network connect n8n_net <caddy_container_id>
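To double-check that both containers actually joined the network (so Caddy can reach n8n by name):
docker network inspect n8n_net    # both containers should appear under "Containers"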
This should help you update n8n running inside a Docker container, regardless of your setup 🙂
If you run into questions or issues, you can leave a comment or contact me. Thanks.