Deploy MySQL and XtraBackup in Docker
Deploying and managing a robust MySQL environment in modern applications often demands both reliability and ease of automation.
Docker provides an ideal platform to achieve this, allowing you to encapsulate your MySQL database along with backup tools
like Percona XtraBackup into isolated, reproducible containers. In this guide, I’ll walk you through a fully organized
deployment setup where MySQL runs in Docker alongside XtraBackup, enabling full and incremental backups, binlog archiving,
and metadata tracking — all in a structured, maintainable directory layout.
The architecture follows a clear separation of concerns:
```
/opt/mysql-stack/
├── mysql/
│   ├── conf.d/        # MySQL configuration files
│   ├── data/          # Persistent MySQL data
│   └── logs/          # MySQL server logs
│
├── xtrabackup/
│   └── scripts/       # Backup scripts (full, incremental, restore, etc.)
│
├── backups/
│   ├── full/          # Full backups
│   ├── incr/          # Incremental backups
│   ├── binlogs/       # Archived binary logs
│   └── chaininfo/     # Metadata for backup chains
│
└── docker/
    ├── docker-compose.yml
    ├── xtrabackup.Dockerfile
    └── .env           # Environment variables for container configuration
```
This structure ensures that each component — the MySQL database, backup tooling, and the backup storage — is modular and maintainable.
It simplifies tasks like restoring a database from a backup chain, applying incremental backups, or managing binlog retention.
By the end of this article, you’ll have a fully containerized MySQL stack with automated backup management,
ready for production-grade deployments — all while keeping the file system clean, organized, and easy to navigate.
Docker Environment Preparation
```bash
sudo dnf update -y
sudo dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo dnf install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
sudo systemctl enable --now docker
sudo usermod -aG docker $USER
```
Verify the installation by running the hello-world image:
```bash
docker run hello-world
```
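If this first run fails with a permission error on the Docker socket, the group change from `usermod` has not taken effect in your current shell yet. Either log out and back in, or pick up the new group in place (assuming you kept the default `docker` group name):
```bash
# Start a subshell with the new "docker" group membership, then retry
newgrp docker
docker run hello-world
```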
Deployment Procedure
Once the directory structure and scripts are prepared, the full MySQL + XtraBackup environment can be deployed in just a few steps.
Below is the exact procedure used to bring the stack online, initialize backup automation, and verify that everything works end-to-end.
Step 1 — Upload the scripts
Upload all provided scripts (backup scripts, entrypoint logic, configuration files, etc.) to a working directory under your user’s home directory.
These scripts will later be used to populate the `/opt/mysql-stack/` directory structure.
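How you get the files onto the server is up to you; as a purely illustrative sketch (the local folder name, remote user, and host below are placeholders), you could copy everything with scp:
```bash
# Copy the prepared scripts and config files from your workstation to the server
scp ./mysql-stack-files/* youruser@your-server:~/mysql-stack-setup/
ssh youruser@your-server "ls -l ~/mysql-stack-setup/"
```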
Step 2 — Run the structure creation script
This step creates the full directory layout and copies all required files into place.
```bash
sudo bash create_structure.sh
```
Step 3 — Deploy MySQL and XtraBackup containers
Navigate to the Docker directory and start the stack:
```bash
cd /opt/mysql-stack/docker
sudo docker compose up -d
sudo docker ps
```
This launches:
* mysql — the main database server
* xtrabackup — the dedicated backup/restore/PITR container
Step 4 — Monitor container logs
Before proceeding, confirm both containers initialized successfully:
```bash
docker logs -f mysql
docker logs -f xtrabackup
```
You should see no errors and MySQL should report `ready for connections`.
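Because the compose file defines a healthcheck for the `mysql` service, you can also poll the container's health status instead of watching the logs; a small helper, assuming the container is named `mysql` as in the compose file:
```bash
# Wait until the MySQL container reports "healthy"
until [ "$(docker inspect --format '{{.State.Health.Status}}' mysql)" = "healthy" ]; do
  echo "waiting for mysql to become healthy..."
  sleep 5
done
echo "mysql is healthy"
```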
Step 5 — Create the required MySQL users
Enter the MySQL container:
```bash
docker exec -it mysql bash
```
Inside the container, create the `xtraback` backup user with the necessary privileges:
```
mysql -uroot -proot@123

CREATE USER 'xtraback'@'%' IDENTIFIED BY 'xtraback@123';
GRANT ALL PRIVILEGES ON *.* TO 'xtraback'@'%' WITH GRANT OPTION;
FLUSH PRIVILEGES;
```
These privileges allow XtraBackup to take consistent hot backups.
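Granting `ALL PRIVILEGES` keeps the example simple, but XtraBackup does not strictly need it. If you prefer a tighter account, something close to the privilege set Percona documents for hot backups should be enough; a sketch only, run from the host (adjust to your MySQL version):
```bash
docker exec -i mysql mysql -uroot -proot@123 <<'SQL'
-- Narrower alternative to GRANT ALL for the backup account
GRANT BACKUP_ADMIN, PROCESS, RELOAD, LOCK TABLES, REPLICATION CLIENT ON *.* TO 'xtraback'@'%';
GRANT SELECT ON performance_schema.log_status TO 'xtraback'@'%';
FLUSH PRIVILEGES;
SQL
```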
Step 6 — Generate test data and run manual backups
To validate the setup, trigger each backup job manually:
```bash
docker exec xtrabackup /scripts/full_backup.sh
docker exec xtrabackup /scripts/incr_backup.sh
docker exec xtrabackup /scripts/archive_binlogs.sh
```
After running these commands, check:
* `/opt/mysql-stack/backups/full/`
* `/opt/mysql-stack/backups/incr/`
* `/opt/mysql-stack/backups/binlogs/`
to confirm that backups were successfully generated.
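The step title mentions test data: if `appdb` is still empty, you can seed a throwaway table and then re-run the backup scripts so there is something concrete to check after a later restore. A minimal sketch (the table name and rows are made up for illustration):
```bash
docker exec -i mysql mysql -uappuser -pappuser@123 appdb <<'SQL'
CREATE TABLE IF NOT EXISTS backup_smoke_test (
  id INT AUTO_INCREMENT PRIMARY KEY,
  note VARCHAR(100),
  created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
INSERT INTO backup_smoke_test (note) VALUES ('row before full backup'), ('another test row');
SQL
```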
Step 7 — Test automated scheduled backups
To test cron-driven backups at a higher frequency, shorten the `*_CRON` schedules in the `.env` file:
```bash
nano /opt/mysql-stack/docker/.env
```
Recreate only the XtraBackup container (the cron schedule is written into its crontab by the entrypoint at container start, so a plain restart will not pick up new values):
```bash
docker compose down xtrabackup
docker compose up -d
docker logs -f xtrabackup
```
Cron inside the backup container will now run at the newly configured intervals.
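To confirm the new schedule actually landed inside the container, inspect the installed crontab and follow the per-job log files it writes under the shared backups volume:
```bash
# Crontab generated by entrypoint.sh at container start
docker exec xtrabackup crontab -l

# Per-job logs on the host side of the bind mount
tail -f /opt/mysql-stack/backups/logs/full_backup.log \
        /opt/mysql-stack/backups/logs/incr_backup.log
```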
Deployment Scripts:
✅ SCRIPT 1 — Directory Structure Creation Script
Use this to prepare the filesystem.
Path: `/user_home_dir/create_structure.sh`
```bash
#!/bin/bash
#---------------------------------------------------------------------------------------------------------------------------------------#
# This script creates the full directory structure required for MySQL and XtraBackup in a Docker environment. #
# If the script files exist in the directory where this script is executed, they will be copied to their designated destination paths. #
# If any script file is missing, an empty placeholder file will be created in the appropriate destination directory. #
#---------------------------------------------------------------------------------------------------------------------------------------#
set -euo pipefail
BASE_DIR="/opt/mysql-stack"
echo "[INFO] Creating directory structure..."
mkdir -p "$BASE_DIR"/{mysql/conf.d,mysql/data,mysql/logs}
mkdir -p "$BASE_DIR"/xtrabackup/scripts
mkdir -p "$BASE_DIR"/backups/{full,incr,binlogs,chaininfo}
mkdir -p "$BASE_DIR"/docker
echo "[INFO] Synchronizing required files (copy if exists, create if missing)..."
sync_file() {
local src="$1"
local dest="$2"
mkdir -p "$(dirname "$dest")" # ensure folder exists
if [[ -f "$src" ]]; then
cp "$src" "$dest"
echo "[OK] Copied: $src → $dest"
else
touch "$dest"
echo "[WARN] Missing $src → created empty file at $dest"
fi
}
sync_file "./.env" "$BASE_DIR/docker/.env"
sync_file "./docker-compose.yml" "$BASE_DIR/docker/docker-compose.yml"
sync_file "./xtrabackup.Dockerfile" "$BASE_DIR/docker/xtrabackup.Dockerfile"
sync_file "./my.cnf" "$BASE_DIR/mysql/conf.d/my.cnf"
sync_file "./config.sh" "$BASE_DIR/xtrabackup/scripts/config.sh"
sync_file "./entrypoint.sh" "$BASE_DIR/xtrabackup/scripts/entrypoint.sh"
sync_file "./full_backup.sh" "$BASE_DIR/xtrabackup/scripts/full_backup.sh"
sync_file "./incr_backup.sh" "$BASE_DIR/xtrabackup/scripts/incr_backup.sh"
sync_file "./archive_binlogs.sh" "$BASE_DIR/xtrabackup/scripts/archive_binlogs.sh"
sync_file "./apply_binlogs.sh" "$BASE_DIR/xtrabackup/scripts/apply_binlogs.sh"
sync_file "./apply_binlogs_max_pitr.sh" "$BASE_DIR/xtrabackup/scripts/apply_binlogs_max_pitr.sh"
sync_file "./restore_backup.sh" "$BASE_DIR/xtrabackup/scripts/restore_backup.sh"
sync_file "./retention.sh" "$BASE_DIR/xtrabackup/scripts/retention.sh"
echo "[INFO] Setting permissions..."
if compgen -G "$BASE_DIR/xtrabackup/scripts/*.sh" > /dev/null; then
chmod +x "$BASE_DIR"/xtrabackup/scripts/*.sh
fi
chmod -R 755 "$BASE_DIR/xtrabackup/" || true
echo "[OK] All required files processed."
echo "[OK] Missing files were automatically created."
echo "[OK] Permissions set successfully."
✅ SCRIPT 2 — `.env` File
Path: `/opt/mysql-stack/docker/.env`
```
# ---------------- MySQL Settings ----------------
MYSQL_ROOT_PASSWORD=root@123
MYSQL_DATABASE=appdb
MYSQL_USER=appuser
MYSQL_PASSWORD=appuser@123
MYSQL_PORT=3306
# ---------------- XtraBackup Settings ----------------
XTRABACKUP_USER=xtraback
XTRABACKUP_PASSWORD=xtraback@123
# ---------------- Backup Schedules ----------------
FULL_BACKUP_CRON="*/30 * * * *"
INCR_BACKUP_CRON="*/10 * * * *"
BINLOG_ARCHIVE_CRON="*/5 * * * *"
RETENTION_CRON="*/60 * * * *"
# ---------------- Backup Directories ----------------
BACKUP_BASE=/backups
FULL_DIR=${BACKUP_BASE}/full
INCR_DIR=${BACKUP_BASE}/incr
BINLOG_DIR=${BACKUP_BASE}/binlogs
CHAININFO_DIR=${BACKUP_BASE}/chaininfo
LOG_DIR=${BACKUP_BASE}/logs
```
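Since the compose file interpolates several of these variables (port, credentials), it is worth checking the resolved configuration before the first `up`:
```bash
cd /opt/mysql-stack/docker
# Render the compose file with values from .env substituted in
docker compose config
```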
✅ SCRIPT 3 — Docker Compose
Path: `/opt/mysql-stack/docker/docker-compose.yml`
```yaml
version: "3.9"

services:
  mysql:
    image: mysql:8.0
    container_name: mysql
    restart: unless-stopped
    ports:
      - "${MYSQL_PORT}:3306"
    env_file:
      - ./.env
    networks:
      - db-net
    environment:
      - TZ=Asia/Dhaka
      - MYSQL_DATABASE=${MYSQL_DATABASE}
      - MYSQL_USER=${MYSQL_USER}
      - MYSQL_PASSWORD=${MYSQL_PASSWORD}
      - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "127.0.0.1", "-u", "root", "-p${MYSQL_ROOT_PASSWORD}"]
      interval: 10s
      timeout: 5s
      retries: 5
    volumes:
      - ../mysql/data:/var/lib/mysql:rw
      - ../mysql/conf.d:/etc/mysql/conf.d
      - ../mysql/logs:/var/log/mysql

  xtrabackup:
    build:
      context: ..
      dockerfile: docker/xtrabackup.Dockerfile
    container_name: xtrabackup
    restart: unless-stopped
    depends_on:
      - mysql
    networks:
      - db-net
    environment:
      - TZ=Asia/Dhaka
    env_file:
      - ./.env
    volumes:
      - ../mysql/data:/var/lib/mysql:rw
      - ../backups:/backups:rw
      - ../xtrabackup/scripts:/scripts

networks:
  db-net:
    driver: bridge
```
✅ SCRIPT 4 — Docker XtraBackup container configuration
Path: `/opt/mysql-stack/docker/xtrabackup.Dockerfile`
```dockerfile
# xtrabackup.Dockerfile
FROM percona/percona-xtrabackup:8.0

USER root

# Install cron + tzdata
RUN microdnf install -y cronie tzdata && \
    microdnf clean all && \
    rm -rf /var/cache/dnf /var/cache/yum /var/cache/microdnf

# Set default timezone
ENV TZ=Asia/Dhaka
RUN ln -sf /usr/share/zoneinfo/Asia/Dhaka /etc/localtime && \
    echo "Asia/Dhaka" > /etc/timezone

# Copy backup scripts
COPY xtrabackup/scripts/ /scripts/
RUN chmod +x /scripts/*.sh

ENTRYPOINT ["/scripts/entrypoint.sh"]
```
✅ SCRIPT 5 — MySQL Configuration
Path: `/opt/mysql-stack/mysql/conf.d/my.cnf`
```ini
[mysqld]
bind-address = 0.0.0.0
server_id = 1001
log_bin = binlog
binlog_format = ROW
binlog_expire_logs_seconds = 604800
innodb_buffer_pool_size = 2500M
innodb_flush_log_at_trx_commit = 1
sync_binlog = 1
max_binlog_size = 104857600
```
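Once the container is running, you can confirm these settings were picked up from the mounted `conf.d` directory, for example:
```bash
docker exec mysql mysql -uroot -proot@123 \
  -e "SHOW VARIABLES WHERE Variable_name IN ('log_bin','binlog_format','binlog_expire_logs_seconds','sync_binlog');"
```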
✅ SCRIPT 6 — Backup Config
Path: `/opt/mysql-stack/xtrabackup/scripts/config.sh`
```bash
#!/bin/bash
set -euo pipefail
# Load runtime env created by entrypoint
if [[ -f /etc/xtrabackup_env.sh ]]; then
. /etc/xtrabackup_env.sh
else
echo "[ERROR] /etc/xtrabackup_env.sh not found" >&2
exit 1
fi
# Timestamp for backups
DATE=$(date +"%Y%m%d-%H%M%S")
# MySQL host/port (can be overridden via .env)
MYSQL_HOST="${MYSQL_HOST:-mysql}"
MYSQL_PORT="${MYSQL_PORT:-3306}"
# Ensure directories exist (again)
mkdir -p "$FULL_DIR" "$INCR_DIR" "$BINLOG_DIR" "$CHAININFO_DIR" "$LOG_DIR"
# Chain metadata files
CHAIN_FILE="$CHAININFO_DIR/chain.info"
LAST_FULL_FILE="$CHAININFO_DIR/LAST_FULL"
LAST_INCR_FILE="$CHAININFO_DIR/LAST_INCR"
```
✅ SCRIPT 7 — Entrypoint (Cron Daemon Runner)
Path: `/opt/mysql-stack/xtrabackup/scripts/entrypoint.sh`
```bash
#!/bin/bash
set -euo pipefail
echo "[INFO] Starting XtraBackup container entrypoint"
# --- Ensure backup directories exist ---
mkdir -p /backups/logs
mkdir -p "${FULL_DIR:-/backups/full}" "${INCR_DIR:-/backups/incr}" "${BINLOG_DIR:-/backups/binlogs}" "${CHAININFO_DIR:-/backups/chaininfo}" "${LOG_DIR:-/backups/logs}"
# --- Touch logs so tail won't fail later ---
touch /backups/logs/full_backup.log /backups/logs/incr_backup.log /backups/logs/archive_binlog.log /backups/logs/backup_retention.log
# --- Write safe env file for cron from current container env (only needed variables) ---
ENV_FILE="/etc/xtrabackup_env.sh"
echo "[INFO] Writing safe environment variables to $ENV_FILE for cron"
cat <<EOF > "$ENV_FILE"   # unquoted heredoc: expand values now, so cron jobs see the real container env
# generated at container start
export MYSQL_HOST="${MYSQL_HOST:-mysql}"
export MYSQL_PORT="${MYSQL_PORT:-3306}"
export MYSQL_USER="${MYSQL_USER:-appuser}"
export MYSQL_PASSWORD="${MYSQL_PASSWORD:-appuser@123}"
export XTRABACKUP_USER="${XTRABACKUP_USER:-xtraback}"
export XTRABACKUP_PASSWORD="${XTRABACKUP_PASSWORD:-xtraback@123}"
export BACKUP_BASE="${BACKUP_BASE:-/backups}"
export FULL_DIR="${FULL_DIR:-${BACKUP_BASE}/full}"
export INCR_DIR="${INCR_DIR:-${BACKUP_BASE}/incr}"
export BINLOG_DIR="${BINLOG_DIR:-${BACKUP_BASE}/binlogs}"
export CHAININFO_DIR="${CHAININFO_DIR:-${BACKUP_BASE}/chaininfo}"
export LOG_DIR="${LOG_DIR:-${BACKUP_BASE}/logs}"
export TZ="${TZ:-Asia/Dhaka}"
# Cron schedule variables (use ones passed into container if present)
export FULL_BACKUP_CRON="${FULL_BACKUP_CRON:-30 2 * * *}"
export INCR_BACKUP_CRON="${INCR_BACKUP_CRON:-*/60 * * * *}"
export BINLOG_ARCHIVE_CRON="${BINLOG_ARCHIVE_CRON:-*/10 * * * *}"
export RETENTION_CRON="${RETENTION_CRON:-0 5 * * *}"
EOF
# Make file usable
chmod +x "$ENV_FILE"
# --- Install cron jobs (root) ---
CRON_TMP=$(mktemp)
{
echo "${FULL_BACKUP_CRON} bash -c '. $ENV_FILE; /scripts/full_backup.sh >> /backups/logs/full_backup.log 2>&1'"
echo "${INCR_BACKUP_CRON} bash -c '. $ENV_FILE; /scripts/incr_backup.sh >> /backups/logs/incr_backup.log 2>&1'"
echo "${BINLOG_ARCHIVE_CRON} bash -c '. $ENV_FILE; /scripts/archive_binlogs.sh >> /backups/logs/archive_binlog.log 2>&1'"
echo "${RETENTION_CRON} bash -c '. $ENV_FILE; /scripts/retention.sh >> /backups/logs/backup_retention.log 2>&1'"
} > "$CRON_TMP"
crontab "$CRON_TMP"
rm -f "$CRON_TMP"
echo "[INFO] Cron jobs installed:"
crontab -l || true
# --- Start cron daemon in background and keep container alive by tailing logs ---
echo "[INFO] Starting cron daemon in background..."
crond || { echo "[ERROR] crond failed to start"; exit 1; }
# tail logs so docker logs -f shows progress
tail -F /backups/logs/*.log
```
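After the stack is up, it is easy to check what this entrypoint actually produced: the generated env snapshot, the installed crontab, and the running cron daemon (the last check assumes `pgrep` is available in the image, as the restore script already assumes for `pkill`):
```bash
docker exec xtrabackup cat /etc/xtrabackup_env.sh   # env snapshot sourced by cron jobs
docker exec xtrabackup crontab -l                   # installed schedules
docker exec xtrabackup pgrep -l crond               # cron daemon should be running
```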
✅ SCRIPT 8 — Full backup
Path: `/opt/mysql-stack/xtrabackup/scripts/full_backup.sh`
```bash
#!/bin/bash
set -euo pipefail
. /scripts/config.sh
echo "[INFO] Starting FULL backup..."
TARGET="full_$DATE"
TARGET_DIR="$FULL_DIR/$TARGET"
# Ensure target directory exists
mkdir -p "$TARGET_DIR"
# Run full backup
xtrabackup --backup \
--target-dir="$TARGET_DIR" \
--host="$MYSQL_HOST" \
--port="$MYSQL_PORT" \
--user="$XTRABACKUP_USER" \
--password="$XTRABACKUP_PASSWORD" \
--parallel=4
# Persist chain metadata safely
echo "BASE_FULL=$TARGET" > "$CHAIN_FILE"
echo "LAST_INCR=" >> "$CHAIN_FILE"
echo "CHAIN_LIST=\"$TARGET\"" >> "$CHAIN_FILE"
# Write LAST files for restore convenience
echo "$TARGET_DIR" > "$LAST_FULL_FILE"
: > "$LAST_INCR_FILE" # empty
echo "[INFO] FULL backup completed: $TARGET_DIR"
echo "[INFO] Chain initialized: $TARGET"
✅ SCRIPT 9 — Incremental backup
Path: `/opt/mysql-stack/xtrabackup/scripts/incr_backup.sh`
```bash
#!/bin/bash
set -euo pipefail
. /scripts/config.sh
# Check that chain file exists
if [[ ! -f "$CHAIN_FILE" ]]; then
echo "[ERROR] No chain info exists. Run full backup first." >&2
exit 1
fi
# Load chain info safely
# shellcheck disable=SC1090
. "$CHAIN_FILE"
# Determine base directory for incremental
if [[ -n "${LAST_INCR:-}" ]]; then
BASE_DIR="$INCR_DIR/$LAST_INCR"
else
BASE_DIR="$FULL_DIR/$BASE_FULL"
fi
if [[ ! -d "$BASE_DIR" ]]; then
echo "[ERROR] Base directory for incremental not found: $BASE_DIR" >&2
exit 1
fi
# Create new incremental backup directory
TARGET="incr_$DATE"
TARGET_DIR="$INCR_DIR/$TARGET"
mkdir -p "$TARGET_DIR"
echo "[INFO] Starting INCREMENTAL backup..."
echo "[INFO] Base directory: $BASE_DIR"
xtrabackup --backup \
--target-dir="$TARGET_DIR" \
--incremental-basedir="$BASE_DIR" \
--host="$MYSQL_HOST" \
--port="$MYSQL_PORT" \
--user="$XTRABACKUP_USER" \
--password="$XTRABACKUP_PASSWORD" \
--parallel=4
# Update chain info safely
LAST_INCR="$TARGET"
if [[ -z "${CHAIN_LIST:-}" ]]; then
CHAIN_LIST="$TARGET"
else
CHAIN_LIST="$CHAIN_LIST $TARGET"
fi
echo "BASE_FULL=$BASE_FULL" > "$CHAIN_FILE"
echo "LAST_INCR=$LAST_INCR" >> "$CHAIN_FILE"
echo "CHAIN_LIST=\"$CHAIN_LIST\"" >> "$CHAIN_FILE" # <-- QUOTED!
# Update LAST_INCR file used by restore helper
echo "$TARGET" > "$LAST_INCR_FILE"
echo "[INFO] INCREMENTAL backup completed: $TARGET_DIR"
echo "[INFO] Chain updated: $CHAIN_LIST"
✅ SCRIPT 10 — Archive binary logs
Path: `/opt/mysql-stack/xtrabackup/scripts/archive_binlogs.sh`
```bash
#!/bin/bash
set -euo pipefail
. /scripts/config.sh
TIMESTAMP=$(date +"%Y-%m-%d %H:%M:%S")
echo "------------------------------------------------------------"
echo "[INFO] Binlog Archiving Started at: $TIMESTAMP"
echo "------------------------------------------------------------"
shopt -s nullglob
BINLOG_SRC_DIR="/var/lib/mysql"
# 1) Copy numeric binlog files (binlog.000001 etc.) if not already archived
for f in "$BINLOG_SRC_DIR"/binlog.[0-9]*; do
base=$(basename "$f")
if [[ ! -f "$BINLOG_DIR/$base" ]]; then
COPY_TIME=$(date +"%Y-%m-%d %H:%M:%S")
echo "[INFO] ($COPY_TIME) Archiving new binlog: $base"
cp -va -- "$f" "$BINLOG_DIR/$base"
fi
done
# 2) Copy the index file to reflect latest changes
if [[ -f "$BINLOG_SRC_DIR/binlog.index" ]]; then
INDEX_TIME=$(date +"%Y-%m-%d %H:%M:%S")
echo "[INFO] ($INDEX_TIME) Updating archived binlog.index"
cp -va -- "$BINLOG_SRC_DIR/binlog.index" "$BINLOG_DIR/binlog.index"
fi
END_TIME=$(date +"%Y-%m-%d %H:%M:%S")
echo "------------------------------------------------------------"
echo "[INFO] Binlog Archiving Completed at: $END_TIME"
echo "------------------------------------------------------------"
✅ SCRIPT 11 — Backup retention
Path: `/opt/mysql-stack/xtrabackup/scripts/retention.sh`
```bash
#!/bin/bash
set -euo pipefail
. /scripts/config.sh
# retention.sh
# - keep last N full backup chains
# - remove corresponding incrementals for deleted chains
# - remove archived binlogs from $BINLOG_DIR ONLY, based on earliest remaining backup
#
# WARNING: this script deletes files. Test in non-prod first.
KEEP_FULL=${KEEP_FULL:-2}
echo "[INFO] Running retention - keeping last $KEEP_FULL full backup chains"
# --- helper: convert folder timestamp (YYYYMMDD-HHMMSS) to epoch seconds ---
stamp_to_epoch() {
local stamp=$1 # expected form: YYYYMMDD-HHMMSS
# defensive check
if [[ ! "$stamp" =~ ^[0-9]{8}-[0-9]{6}$ ]]; then
return 1
fi
local d=${stamp%%-*} # YYYYMMDD
local t=${stamp#*-} # HHMMSS
local t2="${t:0:2}:${t:2:2}:${t:4:2}"
# date -d accepts "YYYYMMDD HH:MM:SS"
date -d "${d} ${t2}" +%s 2>/dev/null || return 1
}
# --- step 1: collect full backups (oldest -> newest) ---
mapfile -t FULLS < <(find "$FULL_DIR" -maxdepth 1 -mindepth 1 -type d -name "full_*" | sort)
TOTAL_FULLS=${#FULLS[@]}
echo "[INFO] Found $TOTAL_FULLS full backup(s)."
if (( TOTAL_FULLS <= KEEP_FULL )); then
echo "[INFO] Nothing to delete (total fulls <= KEEP_FULL)."
else
DELETE_COUNT=$(( TOTAL_FULLS - KEEP_FULL ))
DELETE_FULLS=("${FULLS[@]:0:DELETE_COUNT}")
echo "[INFO] Will delete $DELETE_COUNT full backup chain(s):"
printf " %s\n" "${DELETE_FULLS[@]}"
# delete each full chain and its incrementals between this full and next kept full
for idx in $(seq 0 $((DELETE_COUNT - 1))); do
FULL_PATH="${DELETE_FULLS[$idx]}"
FULL_BASENAME=$(basename "$FULL_PATH")
FULL_TIME=$(echo "$FULL_BASENAME" | cut -d_ -f2)
echo "[INFO] Deleting full backup: $FULL_PATH"
rm -rf -- "$FULL_PATH"
# determine the 'next kept full' time (the first kept full is at index DELETE_COUNT)
NEXT_FULL_TIME=""
if [[ -n "${FULLS[$DELETE_COUNT]:-}" ]]; then
NEXT_FULL_TIME=$(basename "${FULLS[$DELETE_COUNT]}" | cut -d_ -f2)
fi
# find and delete incrementals that belong to this removed chain
mapfile -t INCR_LIST < <(find "$INCR_DIR" -maxdepth 1 -mindepth 1 -type d -name "incr_*" | sort)
for INC in "${INCR_LIST[@]}"; do
INC_BASENAME=$(basename "$INC")
INC_TIME=$(echo "$INC_BASENAME" | cut -d_ -f2)
if [[ -n "$NEXT_FULL_TIME" ]]; then
if [[ "$INC_TIME" > "$FULL_TIME" && "$INC_TIME" < "$NEXT_FULL_TIME" ]]; then
echo "[INFO] Removing incremental: $INC"
rm -rf -- "$INC"
fi
else
# no next full (we deleted older ones), remove incrementals after FULL_TIME
if [[ "$INC_TIME" > "$FULL_TIME" ]]; then
echo "[INFO] Removing incremental: $INC"
rm -rf -- "$INC"
fi
fi
done
done
fi
# --------------------------
# BINLOG RETENTION (BACKUP DIR ONLY)
# --------------------------
echo "[INFO] Determining earliest required backup timestamp for binlog retention..."
# earliest remaining full
mapfile -t REM_FULLS < <(find "$FULL_DIR" -maxdepth 1 -mindepth 1 -type d -name "full_*" | sort)
if (( ${#REM_FULLS[@]} == 0 )); then
echo "[WARN] No full backups remain. Will NOT delete any archived binlogs."
exit 0
fi
EARLIEST_FULL_BASENAME=$(basename "${REM_FULLS[0]}")
EARLIEST_FULL_TIME=$(echo "$EARLIEST_FULL_BASENAME" | cut -d_ -f2)
# earliest remaining incremental (if any)
mapfile -t REM_INCR < <(find "$INCR_DIR" -maxdepth 1 -mindepth 1 -type d -name "incr_*" | sort)
EARLIEST_TIME="$EARLIEST_FULL_TIME"
if (( ${#REM_INCR[@]} > 0 )); then
FIRST_INCR_BASENAME=$(basename "${REM_INCR[0]}")
FIRST_INCR_TIME=$(echo "$FIRST_INCR_BASENAME" | cut -d_ -f2)
# choose the earlier timestamp
if [[ "$FIRST_INCR_TIME" < "$EARLIEST_TIME" ]]; then
EARLIEST_TIME="$FIRST_INCR_TIME"
fi
fi
echo "[INFO] Earliest required backup timestamp (folder-based): $EARLIEST_TIME"
# Convert earliest required backup timestamp to epoch
EARLIEST_EPOCH=$(stamp_to_epoch "$EARLIEST_TIME" || true)
if [[ -z "${EARLIEST_EPOCH:-}" ]]; then
echo "[ERROR] Could not parse earliest backup timestamp '$EARLIEST_TIME'. Aborting binlog deletion."
exit 1
fi
# We will attempt to determine earliest required archived binlog in BINLOG_DIR.
# Use binlog.index in $BINLOG_DIR (copied from MySQL datadir) if present.
if [[ -f "$BINLOG_DIR/binlog.index" ]]; then
echo "[INFO] Using $BINLOG_DIR/binlog.index to determine earliest required archived binlog."
# read entries in index in order
EARLIEST_BINLOG_NEEDED=""
while IFS= read -r line; do
[[ -z "$line" ]] && continue
BL_NAME=$(basename "$line")
BL_PATH="$BINLOG_DIR/$BL_NAME"
# skip if archived copy doesn't exist
if [[ ! -f "$BL_PATH" ]]; then
# not present in archive; skip
continue
fi
# get mtime of archived binlog
BL_MTIME=$(stat -c %Y "$BL_PATH")
if (( BL_MTIME >= EARLIEST_EPOCH )); then
EARLIEST_BINLOG_NEEDED="$BL_NAME"
break
fi
done < "$BINLOG_DIR/binlog.index"
if [[ -z "$EARLIEST_BINLOG_NEEDED" ]]; then
echo "[WARN] Could not find an archived binlog in index with mtime >= earliest backup epoch."
echo "[WARN] To be safe, will NOT delete any archived binlogs. If you are sure, re-run with a manual override."
exit 0
fi
else
# fallback: compute by scanning archived binlogs' mtimes in BINLOG_DIR
echo "[WARN] No binlog.index found in $BINLOG_DIR; falling back to scanning archived binlog mtimes."
EARLIEST_BINLOG_NEEDED=""
for BL in "$BINLOG_DIR"/binlog.[0-9]*; do
[[ -e "$BL" ]] || continue
BL_BASE=$(basename "$BL")
BL_MTIME=$(stat -c %Y "$BL")
if (( BL_MTIME >= EARLIEST_EPOCH )); then
EARLIEST_BINLOG_NEEDED="$BL_BASE"
break
fi
done
if [[ -z "$EARLIEST_BINLOG_NEEDED" ]]; then
echo "[WARN] No archived binlog has mtime >= earliest backup epoch. Will NOT delete any archived binlogs."
exit 0
fi
fi
echo "[INFO] Earliest required archived binlog: $EARLIEST_BINLOG_NEEDED"
# extract numeric index from name 'binlog.000123'
if [[ ! "$EARLIEST_BINLOG_NEEDED" =~ ^binlog\.([0-9]+)$ ]]; then
echo "[ERROR] Unexpected binlog filename format: $EARLIEST_BINLOG_NEEDED. Aborting."
exit 1
fi
EARLIEST_NUM=${BASH_REMATCH[1]}
# now loop archived binlogs and delete those with numeric < EARLIEST_NUM
echo "[INFO] Deleting archived binlogs in $BINLOG_DIR with number < $EARLIEST_NUM"
deleted_any=0
for BL in "$BINLOG_DIR"/binlog.[0-9]*; do
[[ -e "$BL" ]] || continue
BL_BASE=$(basename "$BL")
if [[ "$BL_BASE" == "binlog.index" ]]; then
continue
fi
if [[ ! "$BL_BASE" =~ ^binlog\.([0-9]+)$ ]]; then
echo "[DEBUG] Skipping unexpected file in binlog dir: $BL_BASE"
continue
fi
BL_NUM=${BASH_REMATCH[1]}
# numeric compare using base 10 to avoid octal interpretation
if (( 10#$BL_NUM < 10#$EARLIEST_NUM )); then
echo "[INFO] Removing archived binlog: $BL_BASE"
rm -f -- "$BL"
deleted_any=1
fi
done
if (( deleted_any == 0 )); then
echo "[INFO] No archived binlogs needed deletion."
else
echo "[INFO] Archived binlog deletion completed."
fi
echo "[INFO] Retention completed."
exit 0
```
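The number of full chains to keep can be overridden per run via the `KEEP_FULL` environment variable, which the script reads with a default of 2; for example:
```bash
# Keep the three most recent full backup chains instead of two
docker exec -e KEEP_FULL=3 xtrabackup /scripts/retention.sh
```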
✅ SCRIPT 12 — Backup chain restoration
Path: `/opt/mysql-stack/xtrabackup/scripts/restore_backup.sh`
```bash
#!/bin/bash
set -euo pipefail
# ---------------------------------------
# Load MySQL and backup config
# ---------------------------------------
. /scripts/config.sh # <-- make sure path is correct
# ---------------------------------------
# Check if MySQL is running
# ---------------------------------------
echo "[INFO] Checking if MySQL is running..."
if mysqladmin ping -h "$MYSQL_HOST" -P "$MYSQL_PORT" -u"$MYSQL_USER" -p"$MYSQL_PASSWORD" >/dev/null 2>&1; then
echo "[ERROR] MySQL is running. Stop it before running restore_backup_chain.sh"
echo "[INFO] docker stop <mysql-container>"
exit 1
fi
echo "[INFO] MySQL is not running — safe to restore."
# ---------------------------------------
# Disable cron inside container
# ---------------------------------------
echo "[INFO] Disabling cron inside container..."
pkill crond 2>/dev/null || true
if [[ -x /usr/sbin/crond ]]; then
mv /usr/sbin/crond /usr/sbin/crond.disabled 2>/dev/null || true
echo "[INFO] Cron disabled by renaming binary."
else
echo "[INFO] Cron already disabled."
fi
echo "[INFO] Cron daemon disabled successfully."
# ---------------------------------------
# Config paths
# ---------------------------------------
BACKUPS_BASE="/backups"
FULL_DIR="$BACKUPS_BASE/full"
INCR_DIR="$BACKUPS_BASE/incr"
MYSQL_DATADIR="/var/lib/mysql"
# ---------------------------------------
# 1) List available full backups
# ---------------------------------------
echo "[INFO] Available full backups:"
mapfile -t FULL_BACKUPS < <(find "$FULL_DIR" -maxdepth 1 -type d -name "full_*" | sort)
if [[ ${#FULL_BACKUPS[@]} -eq 0 ]]; then
echo "[ERROR] No full backups found in: $FULL_DIR"
exit 1
fi
for i in "${!FULL_BACKUPS[@]}"; do
echo " [$i] $(basename "${FULL_BACKUPS[$i]}")"
done
# ---------------------------------------
# 2) Prompt user to select a full backup
# ---------------------------------------
read -rp "[INPUT] Enter the number of the full backup to restore (default: latest): " CHOICE
if [[ -z "$CHOICE" ]]; then
CHOICE=$((${#FULL_BACKUPS[@]} - 1)) # default: latest
fi
if ! [[ "$CHOICE" =~ ^[0-9]+$ ]] || (( CHOICE < 0 || CHOICE >= ${#FULL_BACKUPS[@]} )); then
echo "[ERROR] Invalid choice."
exit 1
fi
LATEST_FULL="${FULL_BACKUPS[$CHOICE]}"
echo "[INFO] Selected FULL backup: $LATEST_FULL"
# Extract timestamp
FULL_TS=$(basename "$LATEST_FULL" | cut -d_ -f2)
# ---------------------------------------
# 3) Collect incremental backups newer than selected full
# ---------------------------------------
echo "[INFO] Searching for incremental backups..."
mapfile -t INCR_LIST < <(
find "$INCR_DIR" -maxdepth 1 -type d -name "incr_*" | sort |
while read -r DIR; do
INC_TS=$(basename "$DIR" | cut -d_ -f2)
if [[ "$INC_TS" > "$FULL_TS" ]]; then
echo "$DIR"
fi
done
)
if [[ ${#INCR_LIST[@]} -eq 0 ]]; then
echo "[INFO] No incremental backups found after selected full."
else
echo "[INFO] Incrementals found: ${#INCR_LIST[@]}"
for i in "${!INCR_LIST[@]}"; do
echo " [$i] $(basename "${INCR_LIST[$i]}")"
done
# Prompt user to select last incremental to apply
read -rp "[INPUT] Enter the number of the last incremental to apply (default: latest): " INC_CHOICE
if [[ -z "$INC_CHOICE" ]]; then
INC_CHOICE=$((${#INCR_LIST[@]} - 1)) # default: latest
fi
if ! [[ "$INC_CHOICE" =~ ^[0-9]+$ ]] || (( INC_CHOICE < 0 || INC_CHOICE >= ${#INCR_LIST[@]} )); then
echo "[ERROR] Invalid choice."
exit 1
fi
# Apply only up to selected incremental
INCR_LIST=("${INCR_LIST[@]:0:INC_CHOICE+1}")
fi
# ---------------------------------------
# Build restore chain
# ---------------------------------------
CHAIN=("$LATEST_FULL" "${INCR_LIST[@]}")
echo "[INFO] Restore chain:"
printf " %s\n" "${CHAIN[@]}"
# ---------------------------------------
# 4) Prepare backup chain
# ---------------------------------------
echo "[INFO] Preparing full backup (apply-log-only)..."
xtrabackup --prepare --apply-log-only --target-dir="${CHAIN[0]}"
for ((i=1; i<${#CHAIN[@]}; i++)); do
echo "[INFO] Applying incremental: ${CHAIN[$i]}"
xtrabackup --prepare --apply-log-only --target-dir="${CHAIN[0]}" --incremental-dir="${CHAIN[$i]}"
done
echo "[INFO] Final prepare..."
xtrabackup --prepare --target-dir="${CHAIN[0]}"
echo "[INFO] Backup chain fully prepared."
# ---------------------------------------
# 5) Clean MySQL datadir
# ---------------------------------------
echo "[INFO] Cleaning datadir: $MYSQL_DATADIR"
find "$MYSQL_DATADIR" -mindepth 1 -maxdepth 1 -exec rm -rf {} +
echo "[INFO] Datadir cleaned."
# ---------------------------------------
# 6) Restore files
# ---------------------------------------
echo "[INFO] Running xtrabackup --copy-back..."
xtrabackup --copy-back --target-dir="${CHAIN[0]}"
# ---------------------------------------
# 7) Fix permissions
# ---------------------------------------
echo "[INFO] Fixing permissions..."
chown -R mysql:mysql "$MYSQL_DATADIR"
echo "---------------------------------------------"
echo "[SUCCESS] Backup restored successfully!"
echo "Now exit the container and run:"
echo " docker start <mysql-container>"
echo "Then replay the binary logs by script:"
echo " ./apply_binlogs.sh"
echo "---------------------------------------------"
exit 0
```
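In practice the restore is driven from the host in this order: stop MySQL, run the interactive restore inside the backup container, then start MySQL again and optionally replay binlogs (container names as used in the compose file):
```bash
docker stop mysql
docker exec -it xtrabackup /scripts/restore_backup.sh
docker start mysql
# Optional point-in-time recovery on top of the restored chain
docker exec -it xtrabackup /scripts/apply_binlogs.sh
```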
✅ SCRIPT 13 — Apply binary logs (Point in Time Recovery)
Path: `/opt/mysql-stack/xtrabackup/scripts/apply_binlogs.sh`
```bash
#!/usr/bin/env bash
set -euo pipefail
# apply_binlogs.sh (optimized)
# - applies archived binlogs in $BINLOG_DIR for PITR using --idempotent
# - speeds up replay by toggling safe globals and filtering privilege statements
# - never touches /var/lib/mysql, only reads archived copies in $BINLOG_DIR
#
# Requirements:
# - /scripts/config.sh must set FULL_DIR, BINLOG_DIR, CHAIN_FILE,
# MYSQL_HOST, MYSQL_PORT, XTRABACKUP_USER, XTRABACKUP_PASSWORD, etc.
# - binlog files must be present in $BINLOG_DIR (binlog.000001 ...). binlog.index is ignored.
#
# Optional env vars:
# USE_TMPFS=true # copy binlogs to /dev/shm/<tmpdir> before applying (requires enough RAM)
# DRY_RUN=true # don't actually execute mysql, just show what would be done
. /scripts/config.sh
DRY_RUN="${DRY_RUN:-false}"
USE_TMPFS="${USE_TMPFS:-false}"
echo "---------------------------------------------"
echo "[PITR OPT] Optimized Point-In-Time Recovery with --idempotent"
echo "---------------------------------------------"
# helper for quitting and restoring globals
cleanup_and_exit() {
local rc=${1:-0}
# restore original globals if set
for var in FOREIGN UNIQUE SYNC FLUSH; do
val="_ORIG_$var"
[[ -n "${!val:-}" ]] || continue
echo "[INFO] Restoring GLOBAL ${var}_CHECKS=${!val}"
case "$var" in
FOREIGN) mysql -h "$MYSQL_HOST" -P "$MYSQL_PORT" -u"$XTRABACKUP_USER" -p"$XTRABACKUP_PASSWORD" -e "SET GLOBAL FOREIGN_KEY_CHECKS=${!val};" || true ;;
UNIQUE) mysql -h "$MYSQL_HOST" -P "$MYSQL_PORT" -u"$XTRABACKUP_USER" -p"$XTRABACKUP_PASSWORD" -e "SET GLOBAL UNIQUE_CHECKS=${!val};" || true ;;
SYNC) mysql -h "$MYSQL_HOST" -P "$MYSQL_PORT" -u"$XTRABACKUP_USER" -p"$XTRABACKUP_PASSWORD" -e "SET GLOBAL sync_binlog=${!val};" || true ;;
FLUSH) mysql -h "$MYSQL_HOST" -P "$MYSQL_PORT" -u"$XTRABACKUP_USER" -p"$XTRABACKUP_PASSWORD" -e "SET GLOBAL innodb_flush_log_at_trx_commit=${!val};" || true ;;
esac
done
# remove tmpdir if created
if [[ -n "${_TMP_BINLOG_DIR:-}" && -d "$_TMP_BINLOG_DIR" ]]; then
[[ "$USE_TMPFS" == "true" ]] && echo "[INFO] Removing tmpfs binlog copy: $_TMP_BINLOG_DIR" && rm -rf -- "$_TMP_BINLOG_DIR"
fi
exit "$rc"
}
trap 'cleanup_and_exit 1' INT TERM
# 1) ensure chain metadata exists
[[ -f "$CHAIN_FILE" ]] || { echo "[ERROR] chain.info not found at $CHAIN_FILE"; exit 1; }
. "$CHAIN_FILE"
# 2) pick restore dir (prompt user for full backup)
echo "Available full backups:"
mapfile -t FULL_BACKUPS < <(ls -1 "$FULL_DIR" | sort)
for i in "${!FULL_BACKUPS[@]}"; do
echo " $((i+1))) ${FULL_BACKUPS[i]}"
done
read -rp "Select full backup to use: " SEL
SEL=$((SEL-1))
[[ $SEL -ge 0 && $SEL -lt ${#FULL_BACKUPS[@]} ]] || { echo "[ERROR] Invalid selection"; exit 1; }
RESTORE_DIR="$FULL_DIR/${FULL_BACKUPS[$SEL]}"
echo "[INFO] Selected full backup: $RESTORE_DIR"
BINLOG_INFO="$RESTORE_DIR/xtrabackup_binlog_info"
[[ -f "$BINLOG_INFO" ]] || { echo "[ERROR] xtrabackup_binlog_info not found at $BINLOG_INFO"; exit 1; }
read BINLOG_FILE BINLOG_POS < "$BINLOG_INFO"
echo "[INFO] Backup snapshot ends at:"
echo " Binlog File : $BINLOG_FILE"
echo " Start Position : $BINLOG_POS"
echo
# 3) prepare list of archived binlogs
NEEDED=()
FOUND=0
for f in "$BINLOG_DIR"/binlog.[0-9]*; do
[[ -e "$f" ]] || continue
base=$(basename "$f")
[[ "$base" == "binlog.index" ]] && continue
if [[ "$base" == "$BINLOG_FILE" ]]; then
FOUND=1
NEEDED+=("$f")
continue
fi
(( FOUND == 1 )) && NEEDED+=("$f")
done
[[ ${#NEEDED[@]} -gt 0 ]] || { echo "[ERROR] No archived binlogs found after $BINLOG_FILE"; exit 1; }
echo "[INFO] Found ${#NEEDED[@]} archived binlog(s) to consider:"
printf " %s\n" "${NEEDED[@]}"
echo
# 4) optionally copy to tmpfs
_TMP_BINLOG_DIR=""
if [[ "$USE_TMPFS" == "true" ]]; then
_TMP_BINLOG_DIR="$(mktemp -d /dev/shm/pitr_binlogs.XXXXXX)"
echo "[INFO] Copying archived binlogs to tmpfs ($_TMP_BINLOG_DIR)"
for src in "${NEEDED[@]}"; do cp -a -- "$src" "$_TMP_BINLOG_DIR"/; done
mapfile -t NEEDED < <(find "$_TMP_BINLOG_DIR" -maxdepth 1 -type f -name "binlog.*" | sort)
fi
# 5) prompt user for mode
echo "Select recovery mode:"
echo " 1) Apply all binlogs"
echo " 2) Stop at timestamp (YYYY-MM-DD HH:MM:SS)"
echo " 3) Stop at position (binlog-file:position)"
read -rp "Choice: " MODE
STOP_OPTS=()
STOP_BIN=""
STOP_POS=""
case "$MODE" in
1) echo "[INFO] Will apply all selected binlogs." ;;
2)
read -rp "Enter stop timestamp (YYYY-MM-DD HH:MM:SS): " TS
STOP_OPTS=(--stop-datetime="$TS")
echo "[INFO] Will stop at timestamp: $TS"
;;
3)
read -rp "Enter stop file:pos (e.g. binlog.000123:456): " STOPFP
STOP_BIN="${STOPFP%%:*}"
STOP_POS="${STOPFP##*:}"
echo "[INFO] Will stop at $STOP_BIN:$STOP_POS"
;;
*) echo "[ERROR] Invalid option"; cleanup_and_exit 1 ;;
esac
# 6) capture and set speed-affecting globals
echo "[INFO] Capturing current MySQL global settings..."
_ORIG_FOREIGN=$(mysql -N -s -h "$MYSQL_HOST" -P "$MYSQL_PORT" -u"$XTRABACKUP_USER" -p"$XTRABACKUP_PASSWORD" -e "SELECT @@GLOBAL.foreign_key_checks;")
_ORIG_UNIQUE=$(mysql -N -s -h "$MYSQL_HOST" -P "$MYSQL_PORT" -u"$XTRABACKUP_USER" -p"$XTRABACKUP_PASSWORD" -e "SELECT @@GLOBAL.unique_checks;")
_ORIG_SYNC=$(mysql -N -s -h "$MYSQL_HOST" -P "$MYSQL_PORT" -u"$XTRABACKUP_USER" -p"$XTRABACKUP_PASSWORD" -e "SELECT @@GLOBAL.sync_binlog;")
_ORIG_FLUSH=$(mysql -N -s -h "$MYSQL_HOST" -P "$MYSQL_PORT" -u"$XTRABACKUP_USER" -p"$XTRABACKUP_PASSWORD" -e "SELECT @@GLOBAL.innodb_flush_log_at_trx_commit;")
echo "[INFO] Current: FOREIGN_KEY_CHECKS=$_ORIG_FOREIGN UNIQUE_CHECKS=$_ORIG_UNIQUE sync_binlog=$_ORIG_SYNC innodb_flush_log_at_trx_commit=$_ORIG_FLUSH"
[[ "$DRY_RUN" != "true" ]] && mysql -h "$MYSQL_HOST" -P "$MYSQL_PORT" -u"$XTRABACKUP_USER" -p"$XTRABACKUP_PASSWORD" \
-e "SET GLOBAL FOREIGN_KEY_CHECKS=0; SET GLOBAL UNIQUE_CHECKS=0; SET GLOBAL sync_binlog=0; SET GLOBAL innodb_flush_log_at_trx_commit=0;" \
|| echo "[INFO] DRY_RUN: would set performance globals to 0."
trap 'cleanup_and_exit $?' INT TERM EXIT
# 7) filter statements that require GRANT/USER changes
FILTER_EXPR='
/^[[:space:]]*GRANT[[:space:]]/I d
/^[[:space:]]*REVOKE[[:space:]]/I d
/^[[:space:]]*CREATE[[:space:]]+USER[[:space:]]/I d
/^[[:space:]]*DROP[[:space:]]+USER[[:space:]]/I d
/^[[:space:]]*SET[[:space:]]+PASSWORD[[:space:]]/I d
'
# 8) apply binlogs
START_POS="$BINLOG_POS"
FIRST=1
for BIN in "${NEEDED[@]}"; do
BASE_BIN=$(basename "$BIN")
MB_ARGS=( --idempotent --force-if-open )
(( FIRST == 1 )) && MB_ARGS+=( --start-position="$START_POS" ) || MB_ARGS+=( --start-position=4 )
if [[ "$MODE" == "2" ]]; then
MB_ARGS+=( "${STOP_OPTS[@]}" )
elif [[ "$MODE" == "3" && "$BASE_BIN" == "$STOP_BIN" ]]; then
MB_ARGS+=( --stop-position="$STOP_POS" )
APPLY_LAST=1
fi
TMP_SQL="$(mktemp /tmp/pitr_sql.XXXXXX)"
trap 'rm -f -- "$TMP_SQL" || true; cleanup_and_exit 1' INT TERM
if [[ "$DRY_RUN" == "true" ]]; then
echo "[DRY_RUN] mysqlbinlog ${MB_ARGS[*]} \"$BIN\" | (filter) | mysql -h \"$MYSQL_HOST\" -P \"$MYSQL_PORT\" -u\"$XTRABACKUP_USER\" -p\"<password>\""
rm -f -- "$TMP_SQL"
FIRST=0
continue
fi
mysqlbinlog "${MB_ARGS[@]}" "$BIN" > "$TMP_SQL" || { echo "[ERROR] mysqlbinlog failed on $BIN"; rm -f "$TMP_SQL"; cleanup_and_exit 1; }
sed -E "$FILTER_EXPR" "$TMP_SQL" > "${TMP_SQL}.filtered" || { echo "[ERROR] filtering failed for $BIN"; rm -f "$TMP_SQL" "${TMP_SQL}.filtered"; cleanup_and_exit 1; }
preview=$(grep -m1 -E -v '^[[:space:]]*$|^#|^--' "${TMP_SQL}.filtered" || true)
[[ -n "$preview" ]] && echo "[DEBUG] First SQL after filter: ${preview:0:200}" || echo "[INFO] No executable SQL in $BASE_BIN after filtering; skipping."
echo "[INFO] Piping filtered SQL into MySQL..."
mysql -h "$MYSQL_HOST" -P "$MYSQL_PORT" -u"$XTRABACKUP_USER" -p"$XTRABACKUP_PASSWORD" < "${TMP_SQL}.filtered" \
|| { echo "[ERROR] mysql failed on $BASE_BIN"; rm -f "$TMP_SQL" "${TMP_SQL}.filtered"; cleanup_and_exit 1; }
echo "[INFO] Successfully applied $BASE_BIN"
rm -f -- "$TMP_SQL" "${TMP_SQL}.filtered"
FIRST=0
[[ "${APPLY_LAST:-}" == "1" ]] && echo "[INFO] Reached stop binlog $STOP_BIN, stopping apply." && break
done
echo "[INFO] All requested binlogs processed. Restoring MySQL globals..."
cleanup_and_exit 0
```
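The `DRY_RUN` and `USE_TMPFS` toggles mentioned at the top of the script are plain environment variables, so they can be passed straight through `docker exec`; a dry run is a safe way to preview which binlogs and options would be used:
```bash
# Show what would be replayed without touching the database
docker exec -it -e DRY_RUN=true xtrabackup /scripts/apply_binlogs.sh
```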
✅ SCRIPT 14 — Apply binary logs up to maximum point (Point in Time Recovery)
Path: `/opt/mysql-stack/xtrabackup/scripts/apply_binlogs_max_pitr.sh`
```bash
#!/usr/bin/env bash
set -euo pipefail
. /scripts/config.sh # MYSQL_HOST, MYSQL_PORT, XTRABACKUP_USER, XTRABACKUP_PASSWORD, BINLOG_DIR
DRY_RUN="${DRY_RUN:-false}"
USE_TMPFS="${USE_TMPFS:-false}"
echo "[INFO] Available binlogs in $BINLOG_DIR:"
mapfile -t BINLOGS < <(ls -1 "$BINLOG_DIR"/binlog.[0-9]* | sort)
for i in "${!BINLOGS[@]}"; do
echo " $((i+1))) $(basename "${BINLOGS[i]}")"
done
read -rp "Enter start binlog (number or name): " START_BIN
read -rp "Enter end binlog (number or name): " END_BIN
START_BIN_FULL="$BINLOG_DIR/$START_BIN"
END_BIN_FULL="$BINLOG_DIR/$END_BIN"
[[ -f "$START_BIN_FULL" ]] || { echo "[ERROR] Start binlog not found"; exit 1; }
[[ -f "$END_BIN_FULL" ]] || { echo "[ERROR] End binlog not found"; exit 1; }
APPLY_LIST=()
FOUND=0
for BIN in "${BINLOGS[@]}"; do
[[ "$BIN" == "$START_BIN_FULL" ]] && FOUND=1
(( FOUND == 1 )) && APPLY_LIST+=("$BIN")
[[ "$BIN" == "$END_BIN_FULL" ]] && break
done
echo "[INFO] Binlogs selected to apply:"
printf " %s\n" "${APPLY_LIST[@]}"
read -rp "Enter stop position for last binlog (empty to apply whole binlog): " STOP_POS
# 1) capture current MySQL globals
_ORIG_FOREIGN=$(mysql -N -s -h "$MYSQL_HOST" -P "$MYSQL_PORT" -u"$XTRABACKUP_USER" -p"$XTRABACKUP_PASSWORD" -e "SELECT @@GLOBAL.foreign_key_checks;")
_ORIG_UNIQUE=$(mysql -N -s -h "$MYSQL_HOST" -P "$MYSQL_PORT" -u"$XTRABACKUP_USER" -p"$XTRABACKUP_PASSWORD" -e "SELECT @@GLOBAL.unique_checks;")
_ORIG_SYNC=$(mysql -N -s -h "$MYSQL_HOST" -P "$MYSQL_PORT" -u"$XTRABACKUP_USER" -p"$XTRABACKUP_PASSWORD" -e "SELECT @@GLOBAL.sync_binlog;")
_ORIG_FLUSH=$(mysql -N -s -h "$MYSQL_HOST" -P "$MYSQL_PORT" -u"$XTRABACKUP_USER" -p"$XTRABACKUP_PASSWORD" -e "SELECT @@GLOBAL.innodb_flush_log_at_trx_commit;")
if [[ "$DRY_RUN" != "true" ]]; then
echo "[INFO] Setting globals to speed up apply..."
mysql -h "$MYSQL_HOST" -P "$MYSQL_PORT" -u"$XTRABACKUP_USER" -p"$XTRABACKUP_PASSWORD" -e "
SET GLOBAL FOREIGN_KEY_CHECKS=0;
SET GLOBAL UNIQUE_CHECKS=0;
SET GLOBAL sync_binlog=0;
SET GLOBAL innodb_flush_log_at_trx_commit=0;
"
else
echo "[DRY_RUN] Would set speed-up globals"
fi
# 2) optionally copy to tmpfs for faster reads
TMP_BIN_DIR=""
if [[ "$USE_TMPFS" == "true" ]]; then
TMP_BIN_DIR=$(mktemp -d /dev/shm/binlogs.XXXXXX)
echo "[INFO] Copying binlogs to tmpfs $TMP_BIN_DIR ..."
for f in "${APPLY_LIST[@]}"; do cp -a "$f" "$TMP_BIN_DIR/"; done
mapfile -t APPLY_LIST < <(find "$TMP_BIN_DIR" -maxdepth 1 -type f -name "binlog.*" | sort)
fi
# 3) apply binlogs
for BIN in "${APPLY_LIST[@]}"; do
BASE_BIN=$(basename "$BIN")
echo "[INFO] Applying $BASE_BIN (idempotent)"
MB_ARGS=( --idempotent --force-if-open )
if [[ "$BIN" == "$END_BIN_FULL" && -n "$STOP_POS" ]]; then
MB_ARGS+=( --stop-position="$STOP_POS" )
fi
if [[ "$DRY_RUN" == "true" ]]; then
echo "[DRY_RUN] mysqlbinlog ${MB_ARGS[*]} \"$BIN\" | mysql -h \"$MYSQL_HOST\" -P \"$MYSQL_PORT\" -u\"$XTRABACKUP_USER\" -p\"<password>\""
continue
fi
mysqlbinlog "${MB_ARGS[@]}" "$BIN" | mysql -h "$MYSQL_HOST" -P "$MYSQL_PORT" -u"$XTRABACKUP_USER" -p"$XTRABACKUP_PASSWORD"
echo "[INFO] Applied $BASE_BIN"
done
# 4) restore globals
if [[ "$DRY_RUN" != "true" ]]; then
echo "[INFO] Restoring MySQL globals..."
mysql -h "$MYSQL_HOST" -P "$MYSQL_PORT" -u"$XTRABACKUP_USER" -p"$XTRABACKUP_PASSWORD" -e "
SET GLOBAL FOREIGN_KEY_CHECKS=$_ORIG_FOREIGN;
SET GLOBAL UNIQUE_CHECKS=$_ORIG_UNIQUE;
SET GLOBAL sync_binlog=$_ORIG_SYNC;
SET GLOBAL innodb_flush_log_at_trx_commit=$_ORIG_FLUSH;
"
fi
# 5) cleanup tmpfs
[[ -n "$TMP_BIN_DIR" ]] && rm -rf "$TMP_BIN_DIR"
echo "[INFO] Binlog range applied successfully (idempotent)."



