Setting up a high-performance Linux environment on enterprise hardware isn’t just about installing an OS – it’s about precision, flexibility, and reliability. In this post, I’ll walk you through a real-world case study where I deployed Ubuntu 22.04 LTS on a ProLiant DL360 Gen9 server using mdadm-based software RAID 0 and a fully optimized UEFI boot setup.
The Challenge
My client needed a performant server for hosting CyberPanel and other web services. The server was equipped with 8x 4TB drives and required a RAID 0 configuration for maximum throughput. It also needed UEFI boot support and a fully customized minimal Ubuntu system for performance and control.
However, I ran into multiple real-world obstacles:
The default Ubuntu installer doesn’t support this RAID 0 configuration natively.
UEFI boot and GRUB issues on software RAID.
The network not coming up post-installation.
Emergency-mode boot loops due to missing base packages.
The Solution (Step-by-Step)
Note: perform the following steps from a rescue system or live CD.
Step 1: Disk Partitioning
Each of the 8 drives was partitioned identically using parted.
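A sketch of such a layout; the device names (/dev/sda through /dev/sdh), the 512 MiB EFI System Partition, and the exact boundaries are assumptions, not the commands actually used. It prints the parted invocations so they can be reviewed before being piped to a shell:

```shell
#!/bin/bash
# Print an identical GPT layout for each of the eight drives.
# Device names and partition sizes are assumptions -- adjust to your hardware.
DRIVES=(/dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh)

partition_cmds() {
  local dev="$1"
  echo "parted -s $dev mklabel gpt"
  echo "parted -s $dev mkpart ESP fat32 1MiB 513MiB"   # EFI System Partition
  echo "parted -s $dev set 1 esp on"
  echo "parted -s $dev mkpart raid 513MiB 100%"        # mdadm member partition
  echo "parted -s $dev set 2 raid on"
}

for dev in "${DRIVES[@]}"; do
  partition_cmds "$dev"   # review the output, then pipe to 'bash' to apply
done
```

Emitting the commands first makes it easy to sanity-check the layout on all eight disks before touching anything.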
To fix the network not coming up after installation, a static configuration was written to /etc/network/interfaces:
echo 'auto lo
iface lo inet loopback
auto eno49
iface eno49 inet static
address 178.222.247.237
netmask 255.255.255.128
gateway 178.222.247.1
dns-nameservers 1.1.1.1 8.8.8.8' > /etc/network/interfaces
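Between partitioning and first boot, the striped array itself has to be assembled with mdadm and recorded for the initramfs. A sketch; the member names assume the RAID partition is each disk's second partition:

```shell
#!/bin/bash
# Build the mdadm command for review before running it as root.
# Member names are assumptions: each disk's second partition is the RAID member.
MEMBERS=(/dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2 /dev/sde2 /dev/sdf2 /dev/sdg2 /dev/sdh2)

CREATE_CMD="mdadm --create /dev/md0 --level=0 --raid-devices=${#MEMBERS[@]} ${MEMBERS[*]}"
echo "$CREATE_CMD"

# Once the array is created, persist it so the initramfs can assemble it at boot:
#   mdadm --detail --scan >> /etc/mdadm/mdadm.conf
#   update-initramfs -u
```

The mdadm.conf entry plus an initramfs rebuild is what lets the system detect /dev/md0 immediately at boot.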
The system now boots cleanly from /dev/md0, detects the RAID volume immediately via initramfs, and launches a fully minimal but extendable Ubuntu 22.04 environment ready for CyberPanel.
Boot time was optimized, all legacy EFI entries were removed, and the network stack is stable and persistent across reboots.
If you’re stuck with an ADSL connection, you’ve probably experienced random disconnections, unstable speeds, or the dreaded CRC error spikes that cripple your internet. The worst part? These issues often happen while the connection is technically “up,” but completely unusable.
Instead of manually restarting the modem every time the connection gets unstable, I decided to automate the entire process using a lightweight Bash + Expect script. Now, my modem resets itself when things start to go wrong — and my internet remains stable without me lifting a finger.
What Exactly Did I Build?
A script that monitors CRC errors in real time by connecting to the modem over Telnet.
When the number of CRC errors exceeds a configurable threshold (e.g., 100 errors), the script automatically sends a reset command to the modem.
All actions are logged with timestamps for easy review.
The script runs continuously and checks the connection every few seconds.
Why Is This Important?
CRC errors on ADSL lines often don’t increase gradually.
Instead, the error count can explode into the thousands within minutes, leading to massive packet loss and unusable internet — while your modem still shows as “connected.”
ISPs usually lock your line speed and don’t dynamically adjust it when signal quality degrades.
Manual resets become a constant annoyance… unless you automate them.
How Does It Work?
The script uses expect to reliably automate Telnet sessions with the modem. It:
Logs into the modem using Telnet.
Retrieves the current CRC error count.
If the error count crosses the defined limit, sends the wan adsl reset command.
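A minimal sketch of such a watchdog. The modem address, credentials, prompt strings, the wan adsl reset command, and the get_crc_count helper are all assumptions; adapt them to your modem's telnet CLI:

```shell
#!/bin/bash
# ADSL modem watchdog sketch. Modem address, credentials, prompts and the
# CLI commands below are assumptions -- adapt them to your modem.
MODEM_IP="192.168.1.1"
TELNET_USER="admin"
TELNET_PASS="admin"
CRC_LIMIT=100
LOG_FILE="./adsl-watchdog.log"

log() { echo "$(date '+%F %T') $*" >> "$LOG_FILE"; }

# Pure decision helper: reset once the CRC count reaches the limit.
should_reset() { [ "$1" -ge "$2" ]; }

reset_modem() {
  log "CRC limit exceeded, resetting modem"
  expect <<EOF
spawn telnet $MODEM_IP
expect "Login:"    { send "$TELNET_USER\r" }
expect "Password:" { send "$TELNET_PASS\r" }
expect "> "        { send "wan adsl reset\r" }
expect "> "        { send "exit\r" }
EOF
}

# Main loop (commented out so the sketch can be sourced safely):
#   while true; do
#     crc=$(get_crc_count)   # hypothetical helper scraping the modem's stats
#     should_reset "$crc" "$CRC_LIMIT" && reset_modem
#     sleep 10
#   done
```

Keeping the decision logic in should_reset means the threshold behavior can be tested without a modem on the line.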
Using Varnish Cache in front of a WordPress site can drastically improve performance, reduce server load, and serve content lightning fast — especially when paired with NGINX and Cloudflare. Below is a complete and production-ready setup, including all configuration files and best practices.
Requirements
A Linux server (Ubuntu/Debian/CentOS)
WordPress installed and working
NGINX (used as SSL terminator + PHP backend)
PHP-FPM (for dynamic content)
Varnish installed
Cloudflare (optional but supported)
1. Varnish Configuration (default.vcl)
Location: usually /etc/varnish/default.vcl
Varnish ports: :6081 for external access, :6216 for the internal NGINX backend.
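Note that these listen ports are configured on the varnishd command line, not in the VCL. On a systemd-based distribution this is typically done with a drop-in override; a minimal sketch (the path and the malloc storage size are assumptions, adjust to taste):

```
# /etc/systemd/system/varnish.service.d/override.conf
[Service]
ExecStart=
ExecStart=/usr/sbin/varnishd -a :6081 -f /etc/varnish/default.vcl -s malloc,256m
```

Run systemctl daemon-reload and restart Varnish afterwards.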
vcl 4.0;
import std;
import proxy;
# Define individual backends
backend default {
.host = "127.0.0.1";
.port = "6216"; # internal NGINX backend; use "80" if NGINX listens there after SSL termination
}
# Add hostnames, IP addresses and subnets that are allowed to purge content
acl purge {
"YOUR_SERVER_IP";
"173.245.48.0"/20; # Cloudflare IPs
"103.21.244.0"/22; # Cloudflare IPs
"103.22.200.0"/22; # Cloudflare IPs
"103.31.4.0"/22; # Cloudflare IPs
"141.101.64.0"/18; # Cloudflare IPs
"108.162.192.0"/18; # Cloudflare IPs
"190.93.240.0"/20; # Cloudflare IPs
"188.114.96.0"/20; # Cloudflare IPs
"197.234.240.0"/22; # Cloudflare IPs
"198.41.128.0"/17; # Cloudflare IPs
"162.158.0.0"/15; # Cloudflare IPs
"104.16.0.0"/13; # Cloudflare IPs
"104.24.0.0"/14; # Cloudflare IPs
"172.64.0.0"/13; # Cloudflare IPs
"131.0.72.0"/22; # Cloudflare IPs
}
sub vcl_recv {
set req.backend_hint = default;
# Remove empty query string parameters
# e.g.: www.example.com/index.html?
if (req.url ~ "\?$") {
set req.url = regsub(req.url, "\?$", "");
}
# Remove port number from host header
set req.http.Host = regsub(req.http.Host, ":[0-9]+", "");
# Sorts query string parameters alphabetically for cache normalization purposes
set req.url = std.querysort(req.url);
# Remove the proxy header to mitigate the httpoxy vulnerability
# See https://httpoxy.org/
unset req.http.proxy;
# Add X-Forwarded-Proto header when using https
if (!req.http.X-Forwarded-Proto) {
if(std.port(server.ip) == 443 || std.port(server.ip) == 8443) {
set req.http.X-Forwarded-Proto = "https";
} else {
set req.http.X-Forwarded-Proto = "http";
}
}
# Purge logic to remove objects from the cache.
# Tailored to the Proxy Cache Purge WordPress plugin
# See https://wordpress.org/plugins/varnish-http-purge/
if(req.method == "PURGE") {
if(!client.ip ~ purge) {
return(synth(405,"PURGE not allowed for this IP address"));
}
if (req.http.X-Purge-Method == "regex") {
ban("obj.http.x-url ~ " + req.url + " && obj.http.x-host == " + req.http.host);
return(synth(200, "Purged"));
}
ban("obj.http.x-url == " + req.url + " && obj.http.x-host == " + req.http.host);
return(synth(200, "Purged"));
}
# Only handle relevant HTTP request methods
if (
req.method != "GET" &&
req.method != "HEAD" &&
req.method != "PUT" &&
req.method != "POST" &&
req.method != "PATCH" &&
req.method != "TRACE" &&
req.method != "OPTIONS" &&
req.method != "DELETE"
) {
return (pipe);
}
# Remove tracking query string parameters used by analytics tools
if (req.url ~ "(\?|&)(_branch_match_id|_bta_[a-z]+|_bta_c|_bta_tid|_ga|_gl|_ke|_kx|campid|cof|customid|cx|dclid|dm_i|ef_id|epik|fbclid|gad_source|gbraid|gclid|gclsrc|gdffi|gdfms|gdftrk|hsa_acc|hsa_ad|hsa_cam|hsa_grp|hsa_kw|hsa_mt|hsa_net|hsa_src|hsa_tgt|hsa_ver|ie|igshid|irclickid|matomo_campaign|matomo_cid|matomo_content|matomo_group|matomo_keyword|matomo_medium|matomo_placement|matomo_source|mc_[a-z]+|mc_cid|mc_eid|mkcid|mkevt|mkrid|mkwid|msclkid|mtm_campaign|mtm_cid|mtm_content|mtm_group|mtm_keyword|mtm_medium|mtm_placement|mtm_source|nb_klid|ndclid|origin|pcrid|piwik_campaign|piwik_keyword|piwik_kwd|pk_campaign|pk_keyword|pk_kwd|redirect_log_mongo_id|redirect_mongo_id|rtid|s_kwcid|sb_referer_host|sccid|si|siteurl|sms_click|sms_source|sms_uph|srsltid|toolid|trk_contact|trk_module|trk_msg|trk_sid|ttclid|twclid|utm_[a-z]+|utm_campaign|utm_content|utm_creative_format|utm_id|utm_marketing_tactic|utm_medium|utm_source|utm_source_platform|utm_term|vmcid|wbraid|yclid|zanpid)=") {
set req.url = regsuball(req.url, "(_branch_match_id|_bta_[a-z]+|_bta_c|_bta_tid|_ga|_gl|_ke|_kx|campid|cof|customid|cx|dclid|dm_i|ef_id|epik|fbclid|gad_source|gbraid|gclid|gclsrc|gdffi|gdfms|gdftrk|hsa_acc|hsa_ad|hsa_cam|hsa_grp|hsa_kw|hsa_mt|hsa_net|hsa_src|hsa_tgt|hsa_ver|ie|igshid|irclickid|matomo_campaign|matomo_cid|matomo_content|matomo_group|matomo_keyword|matomo_medium|matomo_placement|matomo_source|mc_[a-z]+|mc_cid|mc_eid|mkcid|mkevt|mkrid|mkwid|msclkid|mtm_campaign|mtm_cid|mtm_content|mtm_group|mtm_keyword|mtm_medium|mtm_placement|mtm_source|nb_klid|ndclid|origin|pcrid|piwik_campaign|piwik_keyword|piwik_kwd|pk_campaign|pk_keyword|pk_kwd|redirect_log_mongo_id|redirect_mongo_id|rtid|s_kwcid|sb_referer_host|sccid|si|siteurl|sms_click|sms_source|sms_uph|srsltid|toolid|trk_contact|trk_module|trk_msg|trk_sid|ttclid|twclid|utm_[a-z]+|utm_campaign|utm_content|utm_creative_format|utm_id|utm_marketing_tactic|utm_medium|utm_source|utm_source_platform|utm_term|vmcid|wbraid|yclid|zanpid)=[-_A-z0-9+(){}%.*]+&?", "");
set req.url = regsub(req.url, "[?|&]+$", "");
}
# Only cache GET and HEAD requests
if (req.method != "GET" && req.method != "HEAD") {
set req.http.X-Cacheable = "NO:REQUEST-METHOD";
return(pass);
}
# Mark static files with the X-Static-File header, and remove any cookies
# X-Static-File is also used in vcl_backend_response to identify static files
if (req.url ~ "^[^?]*\.(7z|avi|bmp|bz2|css|csv|doc|docx|eot|flac|flv|gif|gz|ico|jpeg|jpg|js|less|mka|mkv|mov|mp3|mp4|mpeg|mpg|odt|ogg|ogm|opus|otf|pdf|png|ppt|pptx|rar|rtf|svg|svgz|swf|tar|tbz|tgz|ttf|txt|txz|wav|webm|webp|woff|woff2|xls|xlsx|xml|xz|zip)(\?.*)?$") {
set req.http.X-Static-File = "true";
unset req.http.Cookie;
return(hash);
}
# No caching of special URLs, logged in users and some plugins
if (
req.http.Cookie ~ "wordpress_(?!test_)[a-zA-Z0-9_]+|wp-postpass|comment_author_[a-zA-Z0-9_]+|woocommerce_cart_hash|woocommerce_items_in_cart|wp_woocommerce_session_[a-zA-Z0-9]+|wordpress_logged_in_|comment_author|PHPSESSID" ||
req.http.Authorization ||
req.url ~ "add_to_cart" ||
req.url ~ "edd_action" ||
req.url ~ "nocache" ||
req.url ~ "^/addons" ||
req.url ~ "^/bb-admin" ||
req.url ~ "^/bb-login.php" ||
req.url ~ "^/bb-reset-password.php" ||
req.url ~ "^/cart" ||
req.url ~ "^/checkout" ||
req.url ~ "^/control.php" ||
req.url ~ "^/login" ||
req.url ~ "^/logout" ||
req.url ~ "^/lost-password" ||
req.url ~ "^/my-account" ||
req.url ~ "^/product" ||
req.url ~ "^/register" ||
req.url ~ "^/register.php" ||
req.url ~ "^/server-status" ||
req.url ~ "^/signin" ||
req.url ~ "^/signup" ||
req.url ~ "^/stats" ||
req.url ~ "^/wc-api" ||
req.url ~ "^/wp-admin" ||
req.url ~ "^/wp-comments-post.php" ||
req.url ~ "^/wp-cron.php" ||
req.url ~ "^/wp-login.php" ||
req.url ~ "^/wp-activate.php" ||
req.url ~ "^/wp-mail.php" ||
req.url ~ "^\?add-to-cart=" ||
req.url ~ "^\?wc-api=" ||
req.url ~ "^/preview=" ||
req.url ~ "^/\.well-known/acme-challenge/"
) {
set req.http.X-Cacheable = "NO:Logged in/Got Sessions";
if(req.http.X-Requested-With == "XMLHttpRequest") {
set req.http.X-Cacheable = "NO:Ajax";
}
return(pass);
}
# Cache pages with cookies for non-personalized content
if (!req.http.Cookie || req.http.Cookie ~ "wmc_ip_info|wmc_current_currency|wmc_current_currency_old") {
return(hash); # Cache the response despite cookies
}
# Remove any cookies left
unset req.http.Cookie;
return(hash);
}
sub vcl_hash {
if(req.http.X-Forwarded-Proto) {
# Create cache variations depending on the request protocol
hash_data(req.http.X-Forwarded-Proto);
}
}
sub vcl_backend_response {
# Inject URL & Host header into the object for asynchronous banning purposes
set beresp.http.x-url = bereq.url;
set beresp.http.x-host = bereq.http.host;
# If we don't get a Cache-Control header from the backend,
# we default to a 1h cache for all objects
if (!beresp.http.Cache-Control) {
set beresp.ttl = 1h;
set beresp.http.X-Cacheable = "YES:Forced";
}
# If the file is marked as static we cache it for 1 day
if (bereq.http.X-Static-File == "true") {
unset beresp.http.Set-Cookie;
set beresp.http.X-Cacheable = "YES:Forced";
set beresp.ttl = 1d;
}
# Remove the Set-Cookie header when a specific Wordfence cookie is set
if (beresp.http.Set-Cookie ~ "wfvt_|wordfence_verifiedHuman") {
unset beresp.http.Set-Cookie;
}
if (beresp.http.Set-Cookie) {
set beresp.http.X-Cacheable = "NO:Got Cookies";
} elseif(beresp.http.Cache-Control ~ "private") {
set beresp.http.X-Cacheable = "NO:Cache-Control=private";
}
}
sub vcl_deliver {
# Debug header
if(req.http.X-Cacheable) {
set resp.http.X-Cacheable = req.http.X-Cacheable;
} elseif(obj.uncacheable) {
if(!resp.http.X-Cacheable) {
set resp.http.X-Cacheable = "NO:UNCACHEABLE";
}
} elseif(!resp.http.X-Cacheable) {
set resp.http.X-Cacheable = "YES";
}
# Cleanup of headers
unset resp.http.x-url;
unset resp.http.x-host;
}
2. NGINX Backend Configuration
This is the internal NGINX vhost that Varnish proxies to on port 6216. The {{root}}, {{php_fpm_port}} and {{php_settings}} tokens are template placeholders; replace them with your own document root, PHP-FPM port and PHP settings.
server {
listen 6216;
listen [::]:6216;
server_name localhost;
{{root}}
try_files $uri $uri/ /index.php?$args;
index index.php index.html;
location ~ \.php$ {
include fastcgi_params;
fastcgi_intercept_errors on;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
try_files $uri =404;
fastcgi_read_timeout 3600;
fastcgi_send_timeout 3600;
fastcgi_param HTTPS "on";
fastcgi_param SERVER_PORT 443;
fastcgi_pass 127.0.0.1:{{php_fpm_port}};
fastcgi_param PHP_VALUE "{{php_settings}}";
}
# Static files handling
location ~* ^.+\.(css|js|jpg|jpeg|gif|png|ico|gz|svg|svgz|ttf|otf|woff|woff2|eot|mp4|ogg|ogv|webm|webp|zip|swf|map)$ {
add_header Access-Control-Allow-Origin "*";
expires max;
access_log off;
}
location /.well-known/traffic-advice {
types { } default_type "application/trafficadvice+json; charset=utf-8";
}
if (-f $request_filename) {
break;
}
}
🔁 Replace YOUR_SERVER_IP with your actual public server IP or realip passed by Cloudflare.
🔁 Add to or adjust the excluded paths (/my-account and the others) if they have been translated or renamed on your site.
Together, the vcl_recv, vcl_backend_response, and vcl_deliver subroutines cover request normalization, purge handling, cookie stripping, TTL enforcement, and the X-Cacheable debug header.
If you’re using WP Rocket and want to drastically improve performance, one of the best tricks is to bypass WordPress and PHP entirely when a static cache file exists. While Rocket-Nginx offers this for NGINX servers, Apache users can achieve the same result using smart .htaccess rules.
Here’s how to do it.
The Problem
By default, WP Rocket cache is generated into:
wp-content/cache/wp-rocket/your-domain.com/
But Apache doesn’t know to check there first. So every request, even if cached, still hits WordPress and PHP — wasting resources.
The Solution: .htaccess Rewrite Rules
You can use .htaccess to check if a static cached file exists and serve it immediately, skipping WordPress completely.
Add this to your root .htaccess file:
<IfModule mod_mime.c>
AddEncoding gzip .html_gzip
AddType text/html .html_gzip
</IfModule>
<IfModule mod_headers.c>
<FilesMatch "\.html_gzip$">
Header set Content-Encoding gzip
Header set Content-Type "text/html; charset=UTF-8"
</FilesMatch>
</IfModule>
<IfModule mod_rewrite.c>
RewriteEngine On
# Define cache folder
RewriteCond %{HTTP_HOST} ^(www\.)?(.+)$ [NC]
RewriteRule .* - [E=HOST:%2]
# Serve gzipped cache if supported and exists
RewriteCond %{REQUEST_METHOD} GET
RewriteCond %{HTTP_COOKIE} !(comment_author|wordpress_[a-f0-9]+|wp-postpass|wordpress_logged_in) [NC]
RewriteCond %{HTTPS} on
RewriteCond %{HTTP:Accept-Encoding} gzip
RewriteCond %{DOCUMENT_ROOT}/wp-content/cache/wp-rocket/%{ENV:HOST}/%{REQUEST_URI}/index-https.html_gzip -f
RewriteRule .* /wp-content/cache/wp-rocket/%{ENV:HOST}/%{REQUEST_URI}/index-https.html_gzip [L]
# Serve HTTPS non-gzipped cache if exists
RewriteCond %{HTTPS} on
RewriteCond %{DOCUMENT_ROOT}/wp-content/cache/wp-rocket/%{ENV:HOST}/%{REQUEST_URI}/index-https.html -f
RewriteRule .* /wp-content/cache/wp-rocket/%{ENV:HOST}/%{REQUEST_URI}/index-https.html [L]
# Serve HTTP gzipped cache if supported and exists
RewriteCond %{REQUEST_METHOD} GET
RewriteCond %{HTTP_COOKIE} !(comment_author|wordpress_[a-f0-9]+|wp-postpass|wordpress_logged_in) [NC]
RewriteCond %{HTTPS} off
RewriteCond %{HTTP:Accept-Encoding} gzip
RewriteCond %{DOCUMENT_ROOT}/wp-content/cache/wp-rocket/%{ENV:HOST}/%{REQUEST_URI}/index.html_gzip -f
RewriteRule .* /wp-content/cache/wp-rocket/%{ENV:HOST}/%{REQUEST_URI}/index.html_gzip [L]
# Serve HTTP non-gzipped cache if exists
RewriteCond %{HTTPS} off
RewriteCond %{DOCUMENT_ROOT}/wp-content/cache/wp-rocket/%{ENV:HOST}/%{REQUEST_URI}/index.html -f
RewriteRule .* /wp-content/cache/wp-rocket/%{ENV:HOST}/%{REQUEST_URI}/index.html [L]
</IfModule>
Result
With these rules:
Visitors get blazing fast static HTML delivery
WordPress and PHP are completely bypassed if cache exists
Resource usage drops and TTFB improves dramatically
This is the closest you can get to Rocket-Nginx behavior on Apache, with zero plugins or server-level modules required.
Date of Incident: May 5, 2025
Affected System: AlmaLinux 9.5 with cPanel & PostgreSQL
Incident Summary
A Linux server running cPanel with PostgreSQL was compromised through a misconfigured PostgreSQL service, which allowed an attacker to upload and execute a malicious binary called cpu_hu. This ELF executable is part of a known crypto mining malware campaign, which abuses PostgreSQL’s permissions to spawn unauthorized processes.
Indicators of compromise included:
Crontab entries for the postgres user triggering binary execution
Audit logs showing executions of /usr/bin/s-nail, /usr/sbin/sendmail, and /usr/sbin/exim with UID 26 (PostgreSQL user)
Kernel logs in /var/log/messages showing: Killing process <PID> (cpu_hu) with signal SIGKILL
PostgreSQL failing to restart due to improper permissions after remediation
Attack Vector & Execution
The attacker exploited a misconfigured PostgreSQL installation through one or more of:
trust authentication enabled
publicly accessible port 5432
the ability to execute arbitrary shell commands via SQL extensions like COPY TO PROGRAM
Once inside, the attacker used the postgres user to:
Upload the ELF binary
Schedule its execution via cron
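The shell-execution step typically abuses PostgreSQL's COPY ... TO PROGRAM feature. As a quick audit sketch, this query lists which roles on a server could use it (superusers always can; since PostgreSQL 11, membership in pg_execute_server_program also grants it):

```sql
-- Roles able to run server-side programs via COPY ... TO PROGRAM
SELECT rolname
FROM pg_roles
WHERE rolsuper
   OR pg_has_role(rolname, 'pg_execute_server_program', 'member');
```

Any unexpected role in this list is worth investigating.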
Containment & Mitigation Steps
Step 1: Kill Malware Processes
pkill -f cpu_hu
Step 2: Remove Binary
find / -type f -name '*cpu_hu*' -delete
Step 3: Clean Up Crontab
crontab -u postgres -r
Step 4: Lock Down PostgreSQL
Edit postgresql.conf and add:
session_preload_libraries = ''
Restart service.
Step 5: Reset Credentials and Revoke Risky Functions
-- Inside psql:
ALTER USER postgres PASSWORD 'new-strong-password';
REVOKE EXECUTE ON FUNCTION pg_ls_dir(text) FROM PUBLIC;
REVOKE EXECUTE ON FUNCTION pg_read_file(text) FROM PUBLIC;
REVOKE EXECUTE ON FUNCTION pg_stat_file(text) FROM PUBLIC;
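Beyond revoking those functions, the service itself should stop listening publicly and any trust authentication must go; a sketch of the relevant settings (adjust address ranges to your environment):

```
# postgresql.conf -- bind to loopback only
listen_addresses = 'localhost'

# pg_hba.conf -- replace any 'trust' lines with password authentication
host    all    all    127.0.0.1/32    scram-sha-256
```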
When a cPanel server develops file permission problems (after a migration, manual file operations, or a misbehaving script), websites may become inaccessible, email may fail, or security may be at risk. This script automates fixing file ownership and permissions for one or more cPanel users, returning everything to a secure and functional state.
Use Case
You may need to run this script when:
Website files show 403 Forbidden errors
Email delivery fails due to bad permissions on the user’s etc/ directory
Files were copied or restored without --preserve flags
CageFS directories have incorrect modes
How to run the script
for i in $(ls -A /var/cpanel/users); do ./fixperms "$i"; done
The Script (save as ./fixperms and chmod +x fixperms)
#!/bin/bash
# Script to fix permissions and ownerships for one or more cPanel users
if [ "$#" -lt "1" ]; then
echo "Must specify at least one user"
exit 1
fi
USERS=$@
for user in $USERS; do
HOMEDIR=$(getent passwd "$user" | cut -d: -f6)
if [ ! -f /var/cpanel/users/"$user" ]; then
echo "User file missing for $user, skipping"
continue
elif [ -z "$HOMEDIR" ]; then
echo "Could not determine home directory for $user, skipping"
continue
fi
echo "Fixing ownership and permissions for $user"
# Ownership
chown -R "$user:$user" "$HOMEDIR" >/dev/null 2>&1
chmod 711 "$HOMEDIR" >/dev/null 2>&1
chown "$user:nobody" "$HOMEDIR/public_html" "$HOMEDIR/.htpasswds" 2>/dev/null
chown "$user:mail" "$HOMEDIR/etc" "$HOMEDIR/etc/"*/shadow "$HOMEDIR/etc/"*/passwd 2>/dev/null
# File permissions (parallel)
find "$HOMEDIR" -type f -print0 2>/dev/null | xargs -0 -P4 chmod 644 2>/dev/null
find "$HOMEDIR" -type d ! -name cgi-bin -print0 2>/dev/null | xargs -0 -P4 chmod 755 2>/dev/null
find "$HOMEDIR" -type d -name cgi-bin -print0 2>/dev/null | xargs -0 -P4 chmod 755 2>/dev/null
chmod 750 "$HOMEDIR/public_html" 2>/dev/null
# CageFS fixes
if [ -d "$HOMEDIR/.cagefs" ]; then
chmod 775 "$HOMEDIR/.cagefs" 2>/dev/null
chmod 700 "$HOMEDIR/.cagefs/tmp" "$HOMEDIR/.cagefs/var" 2>/dev/null
chmod 777 "$HOMEDIR/.cagefs/cache" "$HOMEDIR/.cagefs/run" 2>/dev/null
fi
done
When a cPanel server experiences catastrophic failure without any valid backups, restoring websites and databases manually becomes the only option. In my case, the server had completely failed and could only be accessed via a rescue environment. No backups were available in /backup, and the system was non-bootable due to critical library corruption.
To recover, I mounted the failed system, manually transferred essential directories such as /var/lib/mysql and /home to a freshly installed cPanel server using rsync, and fixed ownership and permissions. This restored websites and database files physically, but cPanel/WHM did not recognize the MySQL databases or users.
Problem: cPanel Doesn’t Recognize Existing MySQL Databases
Although the database folders were correctly placed in /var/lib/mysql/ and all MySQL users were present in the mysql.user table, cPanel GUI showed no databases or users associated with any account.
This is expected behavior — cPanel stores mappings between accounts, databases, and MySQL users in its own internal metadata files, which were not recoverable.
Solution: Rebuild cPanel MySQL Mapping Using dbmaptool
To restore MySQL database and user associations for each cPanel account without recreating them manually, I used the official cPanel utility:
/usr/local/cpanel/bin/dbmaptool
I created a script that:
Loops through all cPanel users (found in /var/cpanel/users)
For each user, finds all MySQL databases starting with the user’s prefix (e.g. user_db1)
Finds all MySQL users belonging to that prefix (e.g. user_dbuser1)
Automatically maps them using dbmaptool
#!/bin/bash
for user in $(ls /var/cpanel/users); do
dbs=$(mysql -N -e "SHOW DATABASES LIKE '${user}\_%';" | tr '\n' ',' | sed 's/,$//')
dbusers=$(mysql -N -e "SELECT User FROM mysql.user WHERE User LIKE '${user}\_%';" | tr '\n' ',' | sed 's/,$//')
if [[ -n "$dbs" || -n "$dbusers" ]]; then
echo "Mapping for user: $user"
/usr/local/cpanel/bin/dbmaptool "$user" --type 'mysql' --dbs "$dbs" --dbusers "$dbusers"
fi
done
This forced cPanel to reload and re-index the updated metadata, and all previously invisible databases and MySQL users reappeared in the cPanel UI for each respective account.
Even in total system failure scenarios with no backups, if the /home and /var/lib/mysql directories are intact and MySQL users are present, it’s entirely possible to recover a cPanel environment manually. The key is to re-establish metadata associations using dbmaptool, which tells cPanel which databases and users belong to which accounts.
Lately, posts linking vaping to so-called "popcorn lungs" have been appearing on social media more and more often. Unfortunately, most of them rely on outdated, misinterpreted information that carries no weight in a modern context.
What are "popcorn lungs," really?
"Popcorn lung" is the colloquial name for bronchiolitis obliterans, a rare and serious lung disease that was once linked to the chemical diacetyl. Diacetyl was used as a flavoring in the food industry (for the buttery taste of popcorn, for example), but inhaling high concentrations of it has been associated with airway damage.
How is diacetyl connected to e-cigarettes?
Back in 2014, one study found diacetyl in about 70% of the e-liquids tested. That raised concern, and with good reason. The industry, however, reacted quickly: since 2018, virtually no reputable company has used diacetyl in its liquids.
A 2021 Canadian study tested 825 different e-cigarettes. The result? Only two products contained diacetyl, and neither was manufactured after 2018.
Cigarettes are far more dangerous
Interestingly, conventional cigarettes contain considerably more diacetyl than e-cigarettes ever did. Yet hardly anyone mentions that when vaping needs to be demonized.
In other words: if you used to smoke cigarettes and have switched to vaping, you have already taken a huge step for your health.
What about the people who ended up in hospital?
A large share of the hospitalized cases, especially in the US, had nothing to do with nicotine liquids; they involved illegal black-market THC liquids. Those products often contain vitamin E acetate, which is extremely dangerous to the lungs when inhaled. So the problem is not vaping itself, but unverified, illegal products.
Conclusion: sensationalism is a bigger risk than vaping itself
If you use verified liquids from serious manufacturers and don't buy "street juice in a bottle," there is no reason to panic. Vaping is not risk-free, but it is incomparably less harmful than smoking. Most importantly, the "popcorn lung" myth has been scientifically debunked and no longer belongs in serious discussion.
If you’re managing a Plesk server with multiple domains and want to enable DKIM signing for all of them quickly, this guide is for you.
Why DKIM Matters
DKIM (DomainKeys Identified Mail) helps prevent email spoofing by attaching a cryptographic signature to outgoing messages. Without DKIM, your emails are more likely to land in spam or be rejected entirely — especially by strict providers like Gmail, Outlook, or Yahoo.
The CLI Way: Enable DKIM for All Domains in Seconds
for domain in $(plesk bin domain --list); do
plesk bin site -u "$domain" -mail_service true;
plesk bin domain_pref --update "$domain" -sign_outgoing_mail true;
done
What this script does:
Lists all domains on your Plesk server.
Ensures that the mail service is enabled for each domain.
Enables DKIM signing for outgoing emails on each domain.
If you don’t use Plesk for DNS (e.g., you’re on Cloudflare, Route53, or external nameservers), you’ll need to manually copy each domain’s DKIM public key to its DNS zone.
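Plesk typically signs with the default selector, so each domain's TXT record lives at default._domainkey.<domain>. A small sketch that builds the record name to copy (example.com is a placeholder; loop over your real domains):

```shell
#!/bin/bash
# Compute the DNS record name where the DKIM public key must be published.
# "example.com" is a hypothetical domain -- substitute your own.
DOMAIN="example.com"
RECORD="default._domainkey.${DOMAIN}"
echo "$RECORD"

# After copying the TXT value to the external DNS, verify propagation with:
#   dig +short TXT "$RECORD"
```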
Enabling DNS for all domains:
for domain in $(plesk bin domain --list); do
echo "Enabling DNS for $domain"
plesk bin site -u "$domain" -dns true
done
Before running this command, temporarily disable “Use the serial number format recommended by IETF and RIPE” in Tools & Settings > DNS Settings > Zone Settings Template.