From XtraBackup to RDS: A Production-Grade MySQL Restore Pipeline



Migrating or restoring a large MySQL database into Amazon RDS is rarely as simple as running a single restore command, especially when you need minimal local storage, full automation, and safe cleanup.

In this post, I’ll walk through a fully automated, production-grade pipeline that:

* Streams Percona XtraBackup from an on-prem MySQL server directly to Amazon S3
* Uses a temporary EC2 worker to prepare the backup in an RDS-compatible format
* Automatically self-terminates the EC2 instance once its job is complete
* Restores the prepared backup into a new Amazon RDS MySQL instance
* Cleans up IAM roles, instance profiles, and SSH keys safely

This design avoids long-running EC2 instances, reduces operational cost, and strictly follows least-privilege IAM principles.


When This Pipeline Is Useful

This approach is well-suited for:

* Large database migrations
* Disaster recovery testing
* Controlled production restores
* Compliance-driven environments


High-Level Architecture

+-------------------+
| 🗄️  On-Prem MySQL |
+-------------------+
          |
          |  XtraBackup (xbstream)
          v
+----------------------------+
| ☁️  Amazon S3              |
|    (Raw Backup)            |
+----------------------------+
          |
          |  Download & Extract
          v
+------------------------------------+
| 🖥️  Temporary EC2 Worker            |
|------------------------------------|
| • xbstream extract                  |
| • zstd decompress                   |
| • xtrabackup --prepare              |
| • Upload prepared backup to S3      |
| • Self-terminate                    |
+------------------------------------+
          |
          |  Prepared files
          v
+----------------------------+
| ☁️  Amazon S3              |
|    (Prepared Backup)       |
+----------------------------+
          |
          |  restore-db-instance-from-s3
          v
+----------------------------+
| 🛢️  Amazon RDS MySQL        |
|    (New DB Instance)       |
+----------------------------+


Component Responsibilities (At a Glance)

On-Prem MySQL

* Generates a physical backup using Percona XtraBackup
* Streams the backup directly to Amazon S3 (no large local storage)

Amazon S3 (Raw Backup)

* Stores the unprepared `xbstream` backup
* Acts as the handoff point between on-prem and AWS

Temporary EC2 Worker

* Downloads the raw backup from S3
* Extracts, decompresses, and prepares the backup
* Uploads the prepared backup back to S3
* Terminates itself after completion

Amazon S3 (Prepared Backup)

* Stores the RDS-compatible prepared backup
* Used directly by Amazon RDS during restore

Amazon RDS MySQL

* Pulls the prepared backup from S3
* Restores into a new RDS instance
* Has no dependency on the EC2 worker


Execution Order

On-Prem (Source Database)

bash onprem_xtrabackup_stream_to_s3.sh

AWS CLI (One-Time Setup)

bash create_ec2_restore_role.sh
bash create_rds_import_role.sh
bash launch_restore_ec2.sh

On EC2 (Worker)

bash on_ec2_restore_prepare_upload_and_terminate.sh

AWS CLI (Control Plane)

bash restore_to_rds_and_monitor.sh
bash cleanup_restore_resources.sh

Optional: Archive or Delete Backups in S3

Archive prepared backups (recommended):

aws s3 mv s3://mysql-xtrabackup-backups/prepared/ \
          s3://mysql-xtrabackup-backups/archive/ \
          --recursive

Or delete them entirely:

aws s3 rm s3://mysql-xtrabackup-backups/prepared/ --recursive
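Instead of moving or deleting objects by hand, an S3 lifecycle rule can age out archived backups automatically. A minimal sketch, assuming the same bucket and an illustrative 30-day Glacier transition with a 180-day expiry (adjust the prefix and day counts to your retention policy):

```shell
# Generate a lifecycle policy for the archive/ prefix.
# Bucket name, prefix, and day counts here are assumptions, not
# part of the pipeline above.
cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "archive-prepared-backups",
      "Status": "Enabled",
      "Filter": { "Prefix": "archive/" },
      "Transitions": [
        { "Days": 30, "StorageClass": "GLACIER" }
      ],
      "Expiration": { "Days": 180 }
    }
  ]
}
EOF

# Apply it (requires credentials and s3:PutLifecycleConfiguration):
#   aws s3api put-bucket-lifecycle-configuration \
#     --bucket mysql-xtrabackup-backups \
#     --lifecycle-configuration file://lifecycle.json
```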


Deployment Scripts

> The following scripts form the complete, automated pipeline.
> They are designed to be executed in order, with clear separation between worker and control-plane responsibilities.
> Please edit the variables used in the scripts as required by your environment.


onprem_xtrabackup_stream_to_s3.sh

#!/usr/bin/env bash
set -euo pipefail

MYSQL_HOST="mysql"
MYSQL_PORT="3306"
MYSQL_USER="xtraback"
MYSQL_PASS="xtraback@123"

S3_BUCKET="mysql-xtrabackup-backups"
S3_KEY="raw/full-$(date +%F_%H%M%S).xbstream"

# Metadata-only directory (VERY SMALL)
META_DIR="/tmp/xtrabackup-meta"

rm -rf "$META_DIR"
mkdir -p "$META_DIR"

echo "[INFO] Streaming XtraBackup WITH metadata to S3"

xtrabackup \
  --backup \
  --host="$MYSQL_HOST" \
  --port="$MYSQL_PORT" \
  --user="$MYSQL_USER" \
  --password="$MYSQL_PASS" \
  --stream=xbstream \
  --compress=zstd \
  --target-dir="$META_DIR" \
| aws s3 cp - "s3://$S3_BUCKET/$S3_KEY"

echo "[DONE] Backup uploaded to s3://$S3_BUCKET/$S3_KEY"
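The sample script above hardcodes MySQL credentials for clarity. In practice you may prefer to require them from the environment (or a secrets manager) so the script itself contains no secrets. A hedged sketch; `require_env` is a hypothetical helper, not part of the pipeline:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Fail fast if a required variable is missing, instead of
# embedding credentials in the script.
require_env() {
  local name="$1"
  if [[ -z "${!name:-}" ]]; then
    echo "[ERROR] Environment variable $name is not set" >&2
    return 1
  fi
}

# Example usage before invoking xtrabackup:
#   export MYSQL_USER=... MYSQL_PASS=...
#   require_env MYSQL_USER
#   require_env MYSQL_PASS
```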


create_ec2_restore_role.sh

#!/usr/bin/env bash
set -euo pipefail

########################################
# CONFIGURATION
########################################
ROLE="temp-ec2-xtrabackup-restore-role"
PROFILE="temp-ec2-xtrabackup-restore-profile"

BACKUP_BUCKET="mysql-xtrabackup-backups"
RDS_IMPORT_ROLE="temp-rds-s3-import-role"

########################################
# AUTO-DETECT ACCOUNT ID
########################################
ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)

echo "[INFO] AWS Account ID : $ACCOUNT_ID"
echo "[INFO] EC2 Role      : $ROLE"
echo "[INFO] Instance Prof : $PROFILE"

########################################
# CREATE ROLE (IF NOT EXISTS)
########################################
if ! aws iam get-role --role-name "$ROLE" >/dev/null 2>&1; then
  echo "[INFO] Creating IAM role: $ROLE"

  cat > trust.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "Service": "ec2.amazonaws.com" },
    "Action": "sts:AssumeRole"
  }]
}
EOF

  aws iam create-role \
    --role-name "$ROLE" \
    --assume-role-policy-document file://trust.json

  echo "[OK] Role created"
else
  echo "[OK] Role already exists"
fi

########################################
# APPLY INLINE POLICY (SAFE + LOCKED)
########################################
echo "[INFO] Applying least-privilege inline policy"

cat > policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [

    {
      "Sid": "S3AccessForXtraBackup",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::$BACKUP_BUCKET",
        "arn:aws:s3:::$BACKUP_BUCKET/*"
      ]
    },

    {
      "Sid": "RDSRestorePermissions",
      "Effect": "Allow",
      "Action": [
        "rds:RestoreDBInstanceFromS3",
        "rds:DescribeDBInstances",
        "rds:DescribeDBEngineVersions"
      ],
      "Resource": "*"
    },

    {
      "Sid": "AllowPassOnlyRDSImportRole",
      "Effect": "Allow",
      "Action": "iam:PassRole",
      "Resource": "arn:aws:iam::${ACCOUNT_ID}:role/${RDS_IMPORT_ROLE}",
      "Condition": {
        "StringEquals": {
          "iam:PassedToService": "rds.amazonaws.com"
        }
      }
    },

    {
      "Sid": "SelfTerminateOnlyTaggedInstances",
      "Effect": "Allow",
      "Action": "ec2:TerminateInstances",
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "ec2:ResourceTag/AutoTerminate": "true"
        }
      }
    }
  ]
}
EOF

aws iam put-role-policy \
  --role-name "$ROLE" \
  --policy-name temp-restore-policy \
  --policy-document file://policy.json

echo "[OK] Inline policy applied"

########################################
# ENSURE INSTANCE PROFILE EXISTS
########################################
if ! aws iam get-instance-profile \
  --instance-profile-name "$PROFILE" >/dev/null 2>&1; then

  echo "[INFO] Creating instance profile: $PROFILE"
  aws iam create-instance-profile \
    --instance-profile-name "$PROFILE"

  echo "[OK] Instance profile created"
else
  echo "[OK] Instance profile already exists"
fi

########################################
# ATTACH ROLE TO INSTANCE PROFILE
########################################
if ! aws iam get-instance-profile \
  --instance-profile-name "$PROFILE" \
  --query 'InstanceProfile.Roles[].RoleName' \
  --output text | grep -q "$ROLE"; then

  echo "[INFO] Attaching role to instance profile"
  aws iam add-role-to-instance-profile \
    --instance-profile-name "$PROFILE" \
    --role-name "$ROLE"

  echo "[OK] Role attached"
else
  echo "[OK] Role already attached"
fi

########################################
# DONE
########################################
echo "[DONE] EC2 restore IAM role is READY and SAFE"
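Because policy.json is built by shell interpolation, a stray character in a variable would only surface as an IAM error at put-role-policy time. A small sketch that lints the generated document first (assumes python3 is available on the control host; `validate_json` is a hypothetical helper):

```shell
# Validate generated JSON before handing it to IAM.
validate_json() {
  python3 -m json.tool < "$1" > /dev/null \
    || { echo "[ERROR] $1 is not valid JSON" >&2; return 1; }
}

# Example:
#   validate_json policy.json && aws iam put-role-policy ...
```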


create_rds_import_role.sh

#!/usr/bin/env bash
set -euo pipefail

ROLE="temp-rds-s3-import-role"
POLICY_ARN="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess"

echo "[INFO] Ensuring RDS import role exists: $ROLE"

cat > trust.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "Service": "rds.amazonaws.com" },
    "Action": "sts:AssumeRole"
  }]
}
EOF

if ! aws iam get-role --role-name "$ROLE" >/dev/null 2>&1; then
  aws iam create-role \
    --role-name "$ROLE" \
    --assume-role-policy-document file://trust.json

  echo "[OK] Role created"
else
  echo "[OK] Role already exists – updating trust policy"
  aws iam update-assume-role-policy \
    --role-name "$ROLE" \
    --policy-document file://trust.json
fi

echo "[INFO] Ensuring S3 read policy attached"

if ! aws iam list-attached-role-policies \
  --role-name "$ROLE" \
  --query 'AttachedPolicies[].PolicyArn' \
  --output text | grep -q "$POLICY_ARN"; then

  aws iam attach-role-policy \
    --role-name "$ROLE" \
    --policy-arn "$POLICY_ARN"

  echo "[OK] Policy attached"
else
  echo "[OK] Policy already attached"
fi

echo "[DONE] RDS S3 import role ready"


user-data.sh

#!/bin/bash
set -euxo pipefail

# Log everything for debugging
exec > /var/log/user-data.log 2>&1

echo "[INFO] OS release"
cat /etc/os-release

# Set Timezone
TIMEZONE="Asia/Dhaka"
ln -sf /usr/share/zoneinfo/$TIMEZONE /etc/localtime
echo "$TIMEZONE" > /etc/timezone

# Update system
yum -y update
yum -y install unzip zstd curl wget

# AWS CLI v2
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o /tmp/awscliv2.zip
unzip -q /tmp/awscliv2.zip -d /tmp
/tmp/aws/install
aws --version

# Install Percona release package
yum -y install https://repo.percona.com/yum/percona-release-latest.noarch.rpm

# IMPORTANT: Enable the Percona XtraBackup 8.0 repository
percona-release enable pxb-80

# Non-interactive install of XtraBackup
yum -y install percona-xtrabackup-80

# Verify installation
xtrabackup --version

# Create restore directory
mkdir -p /restore
chown ec2-user:ec2-user /restore

echo "[DONE] Amazon Linux 2 bootstrap complete"


launch_restore_ec2.sh

#!/usr/bin/env bash
set -euo pipefail

########################################
# CONFIGURATION
########################################
AMI="ami-05edef9230865e65c"   # Amazon Linux 2
SUBNET="subnet-************"
SG="sg-**********"
PROFILE="temp-ec2-xtrabackup-restore-profile"
REGION="ap-southeast-1"

KEY_NAME="temp-ec2-xtrabackup-key-$(date +%s)"
KEY_FILE="$HOME/$KEY_NAME.pem"

SSH_USER="ec2-user"
USER_DATA_LOG="/var/log/user-data.log"
DONE_MARKER="\[DONE\]"

########################################
# CREATE KEY PAIR
########################################
echo "[INFO] Creating EC2 key-pair: $KEY_NAME"

aws ec2 create-key-pair \
  --region "$REGION" \
  --key-name "$KEY_NAME" \
  --query 'KeyMaterial' \
  --output text > "$KEY_FILE"

chmod 400 "$KEY_FILE"

########################################
# LAUNCH EC2
########################################
echo "[INFO] Launching EC2 instance"

INSTANCE_ID=$(aws ec2 run-instances \
  --region "$REGION" \
  --image-id "$AMI" \
  --instance-type t3a.medium \
  --subnet-id "$SUBNET" \
  --security-group-ids "$SG" \
  --associate-public-ip-address \
  --key-name "$KEY_NAME" \
  --iam-instance-profile Name="$PROFILE" \
  --user-data file://user-data.sh \
  --block-device-mappings '[{
      "DeviceName": "/dev/xvda",
      "Ebs": {
          "VolumeSize": 50,
          "VolumeType": "gp3",
          "DeleteOnTermination": true
      }
  }]' \
  --tag-specifications '[{
    "ResourceType": "instance",
    "Tags": [
      {"Key": "Name", "Value": "xtrabackup-rds-restore"},
      {"Key": "AutoTerminate", "Value": "true"},
      {"Key": "Purpose", "Value": "mysql-xtrabackup-restore"},
      {"Key": "TTL", "Value": "24h"}
    ]
  }]' \
  --query 'Instances[0].InstanceId' \
  --output text
)

echo "[INFO] EC2 Instance ID: $INSTANCE_ID"

########################################
# WAIT FOR INSTANCE RUNNING
########################################
echo "[INFO] Waiting for EC2 to enter RUNNING state..."

aws ec2 wait instance-running \
  --region "$REGION" \
  --instance-ids "$INSTANCE_ID"

########################################
# FETCH PUBLIC IP
########################################
PUBLIC_IP=$(aws ec2 describe-instances \
  --region "$REGION" \
  --instance-ids "$INSTANCE_ID" \
  --query "Reservations[0].Instances[0].PublicIpAddress" \
  --output text)

if [[ -z "$PUBLIC_IP" || "$PUBLIC_IP" == "None" ]]; then
  echo "[ERROR] Public IP not assigned"
  exit 1
fi

echo "[INFO] EC2 Public IP: $PUBLIC_IP"

########################################
# WAIT FOR SSH
########################################
echo "[INFO] Waiting for SSH to become available..."

until ssh -o StrictHostKeyChecking=no \
          -o ConnectTimeout=5 \
          -i "$KEY_FILE" \
          "$SSH_USER@$PUBLIC_IP" "echo SSH_OK" >/dev/null 2>&1; do
  sleep 5
done

echo "[SUCCESS] SSH is available"

########################################
# MONITOR USER-DATA (SAFE, NON-BLOCKING)
########################################
echo "========================================"
echo "[INFO] Monitoring user-data progress"
echo "========================================"

ssh -o StrictHostKeyChecking=no \
    -i "$KEY_FILE" \
    "$SSH_USER@$PUBLIC_IP" <<EOF
set -e

LOG="$USER_DATA_LOG"
DONE_MARKER="$DONE_MARKER"

# Wait for log file
while [[ ! -f "\$LOG" ]]; do
  sleep 2
done

LAST_LINE=0

while true; do
  TOTAL_LINES=\$(wc -l < "\$LOG")

  if [[ "\$TOTAL_LINES" -gt "\$LAST_LINE" ]]; then
    sed -n "\$((LAST_LINE+1)),\$TOTAL_LINES p" "\$LOG"
    LAST_LINE="\$TOTAL_LINES"
  fi

  if grep -q "\$DONE_MARKER" "\$LOG"; then
    echo "----------------------------------------"
    echo "[INFO] User-data execution completed"
    break
  fi

  sleep 5
done
EOF

########################################
# FINAL OUTPUT
########################################
echo "========================================"
echo "[READY] EC2 restore host is fully initialized"
echo "----------------------------------------"
echo "Instance ID : $INSTANCE_ID"
echo "Public IP   : $PUBLIC_IP"
echo ""
echo "SSH command:"
echo "ssh -i $KEY_FILE $SSH_USER@$PUBLIC_IP"
echo ""
echo "Key file    : $KEY_FILE"
echo "========================================"


on_ec2_restore_prepare_upload_and_terminate.sh

#!/usr/bin/env bash
set -euo pipefail

export AWS_PAGER=""

########################################
# CONFIGURATION
########################################
REGION="ap-southeast-1"

RAW_BUCKET="mysql-xtrabackup-backups"
RAW_PREFIX="raw/"

PREP_BUCKET="mysql-xtrabackup-backups"
PREP_PREFIX="prepared/full-$(date +%F_%H%M%S)"

RESTORE_DIR="/restore"

########################################
# SETUP
########################################
mkdir -p "$RESTORE_DIR"
chmod 750 "$RESTORE_DIR"
cd "$RESTORE_DIR"

# Use IMDSv2 (session token) so this also works when IMDSv1 is disabled
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 300")
INSTANCE_ID=$(curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  http://169.254.169.254/latest/meta-data/instance-id)

if [[ -z "$INSTANCE_ID" ]]; then
  echo "[ERROR] Unable to determine EC2 instance ID"
  exit 1
fi

echo "[INFO] EC2 Instance ID: $INSTANCE_ID"

########################################
# FIND LATEST RAW BACKUP
########################################
echo "[INFO] Locating latest raw backup in S3..."

LATEST_OBJECT=$(aws s3api list-objects-v2 \
  --region "$REGION" \
  --bucket "$RAW_BUCKET" \
  --prefix "$RAW_PREFIX" \
  --query 'reverse(sort_by(Contents,&LastModified))[0].Key' \
  --output text)

if [[ -z "$LATEST_OBJECT" || "$LATEST_OBJECT" == "None" ]]; then
  echo "[ERROR] No raw backups found"
  exit 1
fi

echo "[INFO] Using backup: s3://$RAW_BUCKET/$LATEST_OBJECT"

########################################
# DOWNLOAD + EXTRACT
########################################
echo "[INFO] Downloading and extracting xbstream..."

aws s3 cp "s3://${RAW_BUCKET}/${LATEST_OBJECT}" - \
  | xbstream -x

########################################
# VERIFY METADATA
########################################
if [[ ! -f xtrabackup_info && ! -f xtrabackup_info.zst ]]; then
  echo "[ERROR] xtrabackup_info missing — invalid backup"
  exit 1
fi

########################################
# DECOMPRESS + PREPARE
########################################
echo "[INFO] Decompressing backup"
xtrabackup --decompress --remove-original --target-dir="$RESTORE_DIR"

echo "[INFO] Preparing backup (redo apply)"
xtrabackup --prepare --target-dir="$RESTORE_DIR"

########################################
# UPLOAD PREPARED BACKUP
########################################
echo "[INFO] Uploading prepared backup to S3"
aws s3 sync "$RESTORE_DIR" "s3://${PREP_BUCKET}/${PREP_PREFIX}"

echo "[SUCCESS] Prepared backup uploaded to s3://${PREP_BUCKET}/${PREP_PREFIX}"

########################################
# SELF-TERMINATION
########################################
echo "[INFO] EC2 job completed — requesting self-termination"

aws ec2 terminate-instances \
  --region "$REGION" \
  --instance-ids "$INSTANCE_ID"

echo "[DONE] EC2 termination requested"
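The "latest backup" lookup above orders objects by LastModified, which can be skewed if an older backup is re-uploaded. Since the keys embed a lexically sortable `full-YYYY-MM-DD_HHMMSS` timestamp, you can also pick the newest key by name. A sketch, with the key list inlined for illustration (in the script it would come from `aws s3api list-objects-v2`):

```shell
# Pick the newest backup key by its embedded timestamp.
# Keys follow the raw/full-<date>_<time>.xbstream naming used above.
latest_key() {
  sort | tail -n 1
}

# Example with a hand-written key list:
KEYS="raw/full-2024-01-01_020000.xbstream
raw/full-2024-03-15_020000.xbstream
raw/full-2024-02-10_020000.xbstream"

LATEST=$(printf '%s\n' "$KEYS" | latest_key)
echo "$LATEST"   # raw/full-2024-03-15_020000.xbstream
```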


restore_to_rds_and_monitor.sh

#!/usr/bin/env bash
set -euo pipefail

export AWS_PAGER=""

########################################
# CONFIGURATION
########################################
REGION="ap-southeast-1"

DB_ID="rds-mysql-8044"
DB_SUBNET_GROUP="db-*************"
VPC_SG_IDS="sg-**************"

BUCKET="mysql-xtrabackup-backups"
PREFIX="prepared/"

IMPORT_ROLE_NAME="temp-rds-s3-import-role"

MASTER_USER="mysqladmin"
MASTER_PASS="MySQLadmin123!"

########################################
# AUTO-DETECT ACCOUNT
########################################
ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
IMPORT_ROLE_ARN="arn:aws:iam::${ACCOUNT_ID}:role/${IMPORT_ROLE_NAME}"

echo "[INFO] Account ID      : $ACCOUNT_ID"
echo "[INFO] Import Role ARN : $IMPORT_ROLE_ARN"

########################################
# RESTORE RDS
########################################
echo "[STEP 1] Starting RDS restore..."

aws rds restore-db-instance-from-s3 \
  --region "$REGION" \
  --db-instance-identifier "$DB_ID" \
  --engine mysql \
  --engine-version 8.0.44 \
  --source-engine mysql \
  --source-engine-version 8.0.44 \
  --db-instance-class db.t4g.medium \
  --allocated-storage 40 \
  --db-subnet-group-name "$DB_SUBNET_GROUP" \
  --vpc-security-group-ids "$VPC_SG_IDS" \
  --s3-bucket-name "$BUCKET" \
  --s3-prefix "$PREFIX" \
  --s3-ingestion-role-arn "$IMPORT_ROLE_ARN" \
  --master-username "$MASTER_USER" \
  --master-user-password "$MASTER_PASS"

########################################
# WAIT FOR RDS
########################################
echo "[STEP 2] Waiting for RDS to become available..."

while true; do
  STATUS=$(aws rds describe-db-instances \
    --region "$REGION" \
    --db-instance-identifier "$DB_ID" \
    --query "DBInstances[0].DBInstanceStatus" \
    --output text)

  echo "[INFO] RDS status: $STATUS"

  if [[ "$STATUS" == "available" ]]; then
    echo "[SUCCESS] RDS restore completed"
    break
  fi

  if [[ "$STATUS" == "failed" ]]; then
    echo "[ERROR] RDS restore failed"
    exit 1
  fi

  sleep 60
done
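The status loop above polls indefinitely; if the restore stalls, so does the script. A hedged sketch of the same loop with a deadline (`wait_for_status` and the 4-hour limit are assumptions, not part of the original pipeline):

```shell
# Poll a status-producing command until it reports the desired state,
# the failure state, or a deadline passes.
wait_for_status() {
  local want="$1" fail="$2" timeout_s="$3" interval_s="$4"; shift 4
  local deadline=$(( $(date +%s) + timeout_s ))
  local status
  while (( $(date +%s) < deadline )); do
    status=$("$@")
    echo "[INFO] status: $status"
    [[ "$status" == "$want" ]] && return 0
    [[ "$status" == "$fail" ]] && return 1
    sleep "$interval_s"
  done
  echo "[ERROR] timed out waiting for status '$want'" >&2
  return 1
}

# In the script above this could be called as:
#   wait_for_status available failed 14400 60 \
#     aws rds describe-db-instances --region "$REGION" \
#       --db-instance-identifier "$DB_ID" \
#       --query 'DBInstances[0].DBInstanceStatus' --output text
```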


cleanup_restore_resources.sh

#!/usr/bin/env bash
set -euo pipefail

########################################
# CONFIGURATION
########################################
EC2_ROLE="temp-ec2-xtrabackup-restore-role"
EC2_PROFILE="temp-ec2-xtrabackup-restore-profile"

RDS_ROLE="temp-rds-s3-import-role"
RDS_MANAGED_POLICY_ARN="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess"

########################################
# REQUIRE SSH KEY NAME (MANDATORY)
########################################
KEY_NAME="${1:-}"

if [[ -z "$KEY_NAME" ]]; then
  echo "----------------------------------------"
  echo "[REQUIRED INPUT] SSH key pair name"
  echo "----------------------------------------"
  read -rp "Enter EC2 SSH key pair name to delete: " KEY_NAME
fi

if [[ -z "$KEY_NAME" ]]; then
  echo "[FATAL] SSH key pair name is mandatory. Exiting."
  exit 1
fi

echo "[INFO] SSH key pair to clean up: $KEY_NAME"

########################################
# CONFIRMATION GATE
########################################
echo "----------------------------------------"
echo "This will DELETE the following resources:"
echo "  - EC2 IAM role        : $EC2_ROLE"
echo "  - EC2 instance profile: $EC2_PROFILE"
echo "  - RDS import role     : $RDS_ROLE"
echo "  - EC2 SSH key pair    : $KEY_NAME"
echo "----------------------------------------"

read -rp "Type DELETE to continue: " CONFIRM

if [[ "$CONFIRM" != "DELETE" ]]; then
  echo "[SAFE EXIT] Cleanup aborted by user"
  exit 0
fi

########################################
# EC2 RESTORE ROLE CLEANUP
########################################
echo "========================================"
echo "[EC2] Cleaning up EC2 restore IAM role"
echo "========================================"

if aws iam get-role --role-name "$EC2_ROLE" >/dev/null 2>&1; then
  for POLICY in $(aws iam list-role-policies \
      --role-name "$EC2_ROLE" \
      --query 'PolicyNames[]' \
      --output text); do
    aws iam delete-role-policy \
      --role-name "$EC2_ROLE" \
      --policy-name "$POLICY"
    echo "[OK] Deleted inline policy: $POLICY"
  done
else
  echo "[OK] EC2 role does not exist"
fi

if aws iam get-instance-profile \
    --instance-profile-name "$EC2_PROFILE" >/dev/null 2>&1; then

  for ROLE in $(aws iam get-instance-profile \
      --instance-profile-name "$EC2_PROFILE" \
      --query 'InstanceProfile.Roles[].RoleName' \
      --output text); do
    aws iam remove-role-from-instance-profile \
      --instance-profile-name "$EC2_PROFILE" \
      --role-name "$ROLE"
    echo "[OK] Detached role: $ROLE"
  done

  aws iam delete-instance-profile \
    --instance-profile-name "$EC2_PROFILE"
  echo "[OK] Instance profile deleted"
else
  echo "[OK] Instance profile does not exist"
fi

if aws iam get-role --role-name "$EC2_ROLE" >/dev/null 2>&1; then
  aws iam delete-role --role-name "$EC2_ROLE"
  echo "[OK] EC2 restore role deleted"
else
  echo "[OK] EC2 restore role already deleted"
fi

########################################
# RDS S3 IMPORT ROLE CLEANUP
########################################
echo "========================================"
echo "[RDS] Cleaning up RDS S3 import role"
echo "========================================"

if aws iam get-role --role-name "$RDS_ROLE" >/dev/null 2>&1; then

  ATTACHED=$(aws iam list-attached-role-policies \
    --role-name "$RDS_ROLE" \
    --query "AttachedPolicies[?PolicyArn=='$RDS_MANAGED_POLICY_ARN'].PolicyArn" \
    --output text)

  if [[ -n "$ATTACHED" ]]; then
    aws iam detach-role-policy \
      --role-name "$RDS_ROLE" \
      --policy-arn "$RDS_MANAGED_POLICY_ARN"
    echo "[OK] Detached managed policy"
  fi

  for POLICY in $(aws iam list-role-policies \
      --role-name "$RDS_ROLE" \
      --query 'PolicyNames[]' \
      --output text); do
    aws iam delete-role-policy \
      --role-name "$RDS_ROLE" \
      --policy-name "$POLICY"
    echo "[OK] Deleted inline policy: $POLICY"
  done

  aws iam delete-role --role-name "$RDS_ROLE"
  echo "[OK] RDS import role deleted"
else
  echo "[OK] RDS import role does not exist"
fi

########################################
# EC2 KEY PAIR CLEANUP (MANDATORY)
########################################
echo "========================================"
echo "[EC2] Cleaning up key pair: $KEY_NAME"
echo "========================================"

if aws ec2 describe-key-pairs --key-names "$KEY_NAME" >/dev/null 2>&1; then
  aws ec2 delete-key-pair --key-name "$KEY_NAME"
  echo "[OK] EC2 key pair deleted"
else
  echo "[WARN] Key pair not found in AWS — skipping AWS delete"
fi

if [[ -f "$HOME/$KEY_NAME.pem" ]]; then
  rm -f "$HOME/$KEY_NAME.pem"
  echo "[OK] Local key file removed"
else
  echo "[INFO] Local key file not found — skipping"
fi

########################################
# DONE
########################################
echo "========================================"
echo "[DONE] All restore-related resources cleaned up safely"
echo "========================================"


Final Thoughts

This architecture intentionally separates heavy data processing from orchestration:

* The EC2 worker performs compute-intensive preparation and then terminates itself
* The control plane handles RDS restore and monitoring
* Amazon S3 serves as the only durable handoff point

This approach results in:

* Lower cost
* Reduced security exposure
* Easier auditing
* Clear operational boundaries
