Scripts to take MySQL 8.* full and incremental backups using mysqldump


The scripts provided here take full and incremental backups of MySQL 8.* servers.

The backup script targets a standalone MySQL installation. It also supports optional GPG encryption of backup files and optional upload to Amazon S3.

The first file, backup_restore.conf, holds the configuration shared by the backup and restore scripts: credentials, backup paths, and the optional encryption and S3 settings.

The second file, backup_standalone.sh, takes the actual backups. If no full backup exists for the current date, it creates one; otherwise it takes an incremental backup by copying the binary logs generated since the full backup.

Schedule the script with cron at whatever frequency suits your recovery point objective.
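
A single crontab entry is enough, because the script decides on each run whether a full or an incremental backup is needed. For example (the install path below is an assumption; adjust it to wherever you place the script):

# Run every 4 hours; the first run of the day produces the full backup,
# later runs copy the binary logs as incrementals. The script writes its
# own log file, so cron output can be discarded.
0 */4 * * * /opt/mysql-backup/backup_standalone.sh >/dev/null 2>&1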

backup_restore.conf – Configuration File

# MySQL credentials
MYSQL_USER="myadmin"
MYSQL_PASSWORD="myadmin@123"
MYSQL_HOST="localhost"
MYSQL_PORT="3306"

# Backup settings
BACKUP_DIR="/backups/mysql"
FULL_BACKUP_DIR="$BACKUP_DIR/full"
INCR_BACKUP_DIR="$BACKUP_DIR/incr"
LOG_DIR="$BACKUP_DIR/logs"
RETENTION_DAYS=7

# GPG encryption (optional)
GPG_ENABLED=false
GPG_RECIPIENT="backup@yourdomain.com"

# S3 upload (optional)
S3_ENABLED=false
S3_BUCKET="s3://your-bucket-name"
AWS_PROFILE="default"

# Binary log settings for PITR
BINLOG_DIR="/var/lib/mysql"
START_BINLOG=""  # Binary log file to start replaying from (the file recorded in the full backup)
END_BINLOG=""    # Last binary log to apply, i.e. the log covering the desired restore point

# Usage notes:
# - Ensure BINLOG_DIR points to the directory where MySQL binary logs are stored.
# - START_BINLOG should be the binary log file name recorded at the full backup's last log position.
# - END_BINLOG should be the last binary log to apply for the desired restore point.
# - These variables are used by the restore and verify scripts to apply binary logs for point-in-time recovery.
# - Keep backup_restore.conf secure (e.g. chmod 600), as it contains sensitive credentials.
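
As a rough illustration of how these variables feed point-in-time recovery (the restore script itself is not shown here; file names, timestamps, and paths below are placeholders):

# Sketch: restore the latest full dump, then replay binlogs up to a target time.
gunzip -c "$FULL_BACKUP_DIR"/full-2025-01-01-010000.sql.gz | \
  mysql --user="$MYSQL_USER" --password="$MYSQL_PASSWORD" --host="$MYSQL_HOST" --port="$MYSQL_PORT"

# If the binlog copies were encrypted, decrypt them first:
#   gpg --decrypt --output binlog.000042 binlog.000042.gpg

# Replay from START_BINLOG through END_BINLOG, stopping at the recovery target.
mysqlbinlog --stop-datetime="2025-01-01 12:30:00" \
  "$INCR_BACKUP_DIR"/incr-2025-01-01-050000/binlog.0000* | \
  mysql --user="$MYSQL_USER" --password="$MYSQL_PASSWORD" --host="$MYSQL_HOST" --port="$MYSQL_PORT"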


backup_standalone.sh – Full & Incremental Backup Script


#!/bin/bash
set -euo pipefail
source "$(dirname "$0")/backup_restore.conf"
TODAY=$(date +%F)
NOW=$(date +%H%M%S)
LOG_FILE="$LOG_DIR/backup-$TODAY.log"

ROWCOUNT_FILE="$LOG_DIR/backup_rowcounts.txt"
ENCRYPTED_ROWCOUNT="$ROWCOUNT_FILE.gpg"

# Create directories before redirecting output, so tee can open the log file.
mkdir -p "$FULL_BACKUP_DIR" "$INCR_BACKUP_DIR" "$LOG_DIR"
exec > >(tee -a "$LOG_FILE") 2>&1
echo "=== MySQL Backup Started at $(date) ==="

# === STEP 1: Check for today's full backup ===
LATEST_FULL=$(find "$FULL_BACKUP_DIR" -type f -name "full-$TODAY*.sql.gz" | sort | tail -n 1)
if [[ -z "$LATEST_FULL" ]]; then
  echo "⚠️ No full backup for today found. Creating full backup..."
  FULL_PATH="$FULL_BACKUP_DIR/full-$TODAY-$NOW.sql.gz"
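  # Dump flags:
  #   --single-transaction  consistent InnoDB snapshot without locking tables
  #   --flush-logs          rotate to a fresh binary log when the dump starts
  #   --master-data=2       record binlog coordinates as a comment in the dump
  #                         (deprecated since MySQL 8.0.26 in favor of --source-data,
  #                         but kept here because STEP 2 greps for "CHANGE MASTER TO")
  #   --delete-master-logs  purge binary logs from before the dump once it completes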
  mysqldump --user="$MYSQL_USER" --password="$MYSQL_PASSWORD" --host="$MYSQL_HOST" --port="$MYSQL_PORT" \
    --single-transaction --flush-logs --master-data=2 --all-databases --routines --events --triggers --delete-master-logs | gzip > "$FULL_PATH"
  echo "✅ Full backup created: $FULL_PATH"

  # Upload full backup to S3
  if $S3_ENABLED; then
    S3_KEY="full/full-$TODAY-$NOW.sql.gz"
    echo "Uploading full backup to S3 as $S3_KEY..."
    aws s3 cp "$FULL_PATH" "$S3_BUCKET/$S3_KEY" --profile "$AWS_PROFILE"
    aws s3api head-object --bucket "$(echo "$S3_BUCKET" | cut -d'/' -f3)" --key "$S3_KEY" --profile "$AWS_PROFILE" >/dev/null \
      && echo "✅ S3 upload verified: $S3_KEY" \
      || echo "❌ S3 upload verification failed for $S3_KEY"
  fi
  LATEST_FULL="$FULL_PATH"
else
  echo "✅ Found full backup for today: $LATEST_FULL"
fi

# === STEP 2: Extract last binlog from full backup ===
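# With --master-data=2 the dump contains a commented line such as:
#   -- CHANGE MASTER TO MASTER_LOG_FILE='binlog.000042', MASTER_LOG_POS=157;
# Splitting on single quotes makes field 2 the binlog file name.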
LAST_BINLOG=$(gunzip -c "$LATEST_FULL" | awk -F"'" '/CHANGE MASTER TO/ {print $2}' | head -n 1)
if [[ -z "$LAST_BINLOG" ]]; then
  echo "❌ Could not extract binlog name."
  exit 1
fi
echo "Last binlog: $LAST_BINLOG"

# === STEP 3: Flush logs to rotate binlog ===
# Rotate to a new binary log so the current one is closed and safe to copy.
mysqladmin --user="$MYSQL_USER" --password="$MYSQL_PASSWORD" --host="$MYSQL_HOST" --port="$MYSQL_PORT" flush-logs


# === STEP 4: Copy binlogs starting from LAST_BINLOG (inclusive) ===
INCR_PATH="$INCR_BACKUP_DIR/incr-$TODAY-$NOW"
mkdir -p "$INCR_PATH"
START_COPY=false
for LOG in "$BINLOG_DIR"/binlog.[0-9]*; do  # numeric logs only; skips binlog.index
  LOG_NAME=$(basename "$LOG")
  if [[ "$LOG_NAME" == "$LAST_BINLOG" ]]; then
    START_COPY=true
  fi
  if $START_COPY; then
    echo "Backing up $LOG_NAME"
    if $GPG_ENABLED; then
      gpg --encrypt --recipient "$GPG_RECIPIENT" --output "$INCR_PATH/$LOG_NAME.gpg" "$LOG"
    else
      cp "$LOG" "$INCR_PATH/"
    fi
  fi
done

# === STEP 5: Upload incremental backup to S3 ===
if $S3_ENABLED; then
  S3_KEY="incr/incr-$TODAY-$NOW"
  echo "Uploading incremental backup to S3 as $S3_KEY/"
  aws s3 cp "$INCR_PATH" "$S3_BUCKET/$S3_KEY/" --recursive --profile "$AWS_PROFILE"
  echo "✅ Incremental backup uploaded to S3."
fi

# === STEP 6: Generate row count snapshot ===
echo "Generating row count snapshot..."
> "$ROWCOUNT_FILE"
DBS=$(mysql -N -u"$MYSQL_USER" -p"$MYSQL_PASSWORD" -h"$MYSQL_HOST" -P"$MYSQL_PORT" \
  -e "SHOW DATABASES;" | grep -Ev "^(information_schema|performance_schema|mysql|sys)$")
for db in $DBS; do
  TABLES=$(mysql -N -u"$MYSQL_USER" -p"$MYSQL_PASSWORD" -h"$MYSQL_HOST" -P"$MYSQL_PORT" \
    -e "SHOW TABLES IN \`$db\`;")
  for table in $TABLES; do
    COUNT=$(mysql -N -u"$MYSQL_USER" -p"$MYSQL_PASSWORD" -h"$MYSQL_HOST" -P"$MYSQL_PORT" \
      -e "SELECT COUNT(*) FROM \`$db\`.\`$table\`;")
    echo "$db $table $COUNT" >> "$ROWCOUNT_FILE"
    echo "✅ $db.$table: $COUNT rows"
  done
done
if $GPG_ENABLED; then
  gpg --yes --batch -r "$GPG_RECIPIENT" -o "$ENCRYPTED_ROWCOUNT" -e "$ROWCOUNT_FILE"
  echo "✅ Encrypted row count file: $ENCRYPTED_ROWCOUNT"
fi
if $S3_ENABLED; then
  # Upload the encrypted snapshot when GPG is on; otherwise the plain file.
  if $GPG_ENABLED; then
    S3_KEY="rowcounts/backup_rowcounts-$TODAY-$NOW.txt.gpg"
    aws s3 cp "$ENCRYPTED_ROWCOUNT" "$S3_BUCKET/$S3_KEY" --profile "$AWS_PROFILE"
  else
    S3_KEY="rowcounts/backup_rowcounts-$TODAY-$NOW.txt"
    aws s3 cp "$ROWCOUNT_FILE" "$S3_BUCKET/$S3_KEY" --profile "$AWS_PROFILE"
  fi
  echo "✅ Row count snapshot uploaded to S3: $S3_KEY"
fi

# === Summary ===
echo "=== Backup Summary ==="
echo "Full backup: $LATEST_FULL"
echo "Incremental binlogs: $(ls "$INCR_PATH" | wc -l) files"
[[ -f "$ENCRYPTED_ROWCOUNT" ]] && echo "Row count snapshot: $ENCRYPTED_ROWCOUNT"
echo "Log file: $LOG_FILE"
echo "=== MySQL Backup Finished at $(date) ==="

