Taking daily backups of your MySQL database is an essential task for ensuring data security and preventing data loss. In this blog, we’ll walk you through how to set up a shell script that takes a daily MySQL backup and deletes backup files that are more than one week old.
Step 1: Creating the Backup Script
To create the backup script, open a text editor and enter the following code:
#!/bin/bash
# Set the date format for the backup filename
DATE=$(date +%Y-%m-%d)
# Set the MySQL credentials
MYSQL_USER="your_mysql_username"
MYSQL_PASSWORD="your_mysql_password"
# Set the directory for storing the backups
BACKUP_DIR="/path/to/backup/folder"
# Create the backup file
BACKUP_FILE="$BACKUP_DIR/$DATE.sql"
mysqldump --user="$MYSQL_USER" --password="$MYSQL_PASSWORD" --all-databases > "$BACKUP_FILE"
# Delete backup files older than 7 days
find "$BACKUP_DIR" -type f -name "*.sql" -mtime +7 -exec rm {} \;
In the code above, we first set the date format for the backup filename using the date command. Next, we set the MySQL credentials to be used by the script. You should replace “your_mysql_username” and “your_mysql_password” with the actual credentials for your MySQL server.
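Note that hard-coding the password in the script and passing it on the mysqldump command line is convenient but not ideal, since it can appear in the process list. As an optional alternative, mysqldump can read credentials from a MySQL option file such as ~/.my.cnf; the sketch below uses the same placeholder values as the script:
# ~/.my.cnf (restrict access with: chmod 600 ~/.my.cnf)
[client]
user=your_mysql_username
password=your_mysql_password
With the option file in place, the dump line can be shortened to mysqldump --all-databases > "$BACKUP_FILE".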
We then set the directory for storing the backups using the BACKUP_DIR variable. You should replace “/path/to/backup/folder” with the actual path to the folder where you want to store the backups.
We create the backup file using the mysqldump command. This command dumps all the databases to a file, which is named after the current date.
Finally, we delete backup files older than 7 days using the find command. This command searches the backup directory for files that are older than 7 days and deletes them.
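If you want to check which files the cleanup step would remove before it deletes anything, you can run the same find command with -print instead of -exec rm, substituting your actual backup folder:
find /path/to/backup/folder -type f -name "*.sql" -mtime +7 -print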
Step 2: Making the Script Executable
Once you have created the backup script, you need to make it executable using the chmod command. Open the terminal and navigate to the directory where you saved the backup script. Then, enter the following command:
chmod +x backup.sh
This command makes the script executable.
Step 3: Running the Backup Script
To run the backup script, open the terminal and navigate to the directory where you saved the backup script. Then, enter the following command:
./backup.sh
This command executes the backup script, and a new backup file is created in the backup directory.
Step 4: Automating the Backup Process
To automate the backup process, you can use a cron job to run the backup script at a specific time each day. To set up a cron job, open the terminal and enter the following command:
crontab -e
This command opens the cron table in the editor. Add the following line to the end of the file:
0 0 * * * /path/to/backup.sh
This line sets the backup script to run at midnight each day. Replace “/path/to/backup.sh” with the actual path to the backup script.
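Because cron jobs run without a terminal, it can also be helpful to capture the script’s output in a log file so failures are easy to spot. One possible variant of the crontab entry (the log path here is only an example) is:
0 0 * * * /path/to/backup.sh >> /var/log/mysql-backup.log 2>&1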
Step 5: Verifying the Backup
To verify that the backup is working correctly, you can check the backup directory for the presence of the backup file. You should see a new backup file for each day the script runs.
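For example, you could list the backup folder and glance at the newest dump to confirm it is non-empty and contains SQL statements (the path matches the placeholder used earlier):
ls -lh /path/to/backup/folder
head -n 20 /path/to/backup/folder/$(date +%Y-%m-%d).sql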
Backup Script for Cloud Backup Storage
To adapt the backup script for cloud backup storage, you keep the local dump step and add a step that uploads the backup file to a remote storage location. Here’s how you can modify the script:
Step 1: Set up Remote Storage
To set up remote storage, you can use services like Amazon S3, Google Cloud Storage, or any other cloud storage service that provides an API to upload files. You will need to create an account and set up the necessary credentials to access the remote storage.
Step 2: Install and Configure the AWS CLI
If you are using Amazon S3 for remote storage, you will need to install and configure the AWS CLI on the server where the backup script runs. You can follow the official AWS documentation to install and configure the CLI.
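As a rough outline (the exact steps depend on your environment), configuring the CLI and checking access to your bucket might look like this:
# Store the access key, secret key, and default region for the CLI
aws configure
# Confirm that the bucket is reachable with the configured credentials
aws s3 ls s3://your-bucket-name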
Step 3: Modify the Backup Script
To modify the backup script, you add an upload step after the dump is created. Here’s an example of how you can modify the script to upload the backup file to an S3 bucket:
#!/bin/bash
# Set the date format for the backup filename
DATE=$(date +%Y-%m-%d)
# Set the MySQL credentials
MYSQL_USER="your_mysql_username"
MYSQL_PASSWORD="your_mysql_password"
# Set the directory for storing the backups
BACKUP_DIR="/path/to/local/folder"
# Create the backup file
BACKUP_FILE="$BACKUP_DIR/$DATE.sql"
mysqldump --user="$MYSQL_USER" --password="$MYSQL_PASSWORD" --all-databases > "$BACKUP_FILE"
# Upload backup file to S3
aws s3 cp "$BACKUP_FILE" "s3://your-bucket-name/$DATE.sql"
# Delete backup files older than 7 days
find "$BACKUP_DIR" -type f -name "*.sql" -mtime +7 -exec rm {} \;
In the code above, we added the AWS CLI command aws s3 cp to upload the backup file to the S3 bucket. Replace “your-bucket-name” with the actual name of the S3 bucket where you want to store the backup file.
Step 4: Test the Backup Script
To test the backup script, you can run the script manually and verify that the backup file is uploaded to the remote storage location.
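For example, you could run the script once by hand and then list the bucket to confirm that today’s dump was uploaded (the bucket name is the placeholder from the script):
./backup.sh
aws s3 ls s3://your-bucket-name/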
Step 5: Automate the Backup Process
To automate the backup process, you can set up a cron job to run the backup script at a specific time each day, as described in the previous section.
Backup Script for a Remote Backup Server
To update the backup script for a remote backup server, you need to modify the script to copy the backup file to the remote server after it is created locally. Here’s how you can modify the script:
Step 1: Set up Remote Backup Server
To set up a remote backup server, you need to have access to a remote server with SSH enabled. You will also need to create a backup directory on the remote server and set up the necessary credentials to access the server.
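Because the script will run unattended from cron, the copy step should not prompt for a password, so key-based SSH authentication is usually set up first. A minimal sketch, using the same placeholder server address and remote folder as the script below:
# Generate an SSH key pair if one does not already exist
ssh-keygen -t ed25519
# Install the public key on the backup server
ssh-copy-id your_remote_server_address
# Create the backup directory on the remote server
ssh your_remote_server_address "mkdir -p /path/to/remote/folder"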
Step 2: Modify the Backup Script
To modify the backup script, you need to add an SCP command to copy the backup file to the remote server. Here’s an example of how you can modify the script to copy the backup file to a remote server:
#!/bin/bash
# Set the date format for the backup filename
DATE=$(date +%Y-%m-%d)
# Set the MySQL credentials
MYSQL_USER="your_mysql_username"
MYSQL_PASSWORD="your_mysql_password"
# Set the directory for storing the backups
BACKUP_DIR="/path/to/local/folder"
# Create the backup file
BACKUP_FILE="$BACKUP_DIR/$DATE.sql"
mysqldump --user="$MYSQL_USER" --password="$MYSQL_PASSWORD" --all-databases > "$BACKUP_FILE"
# Copy backup file to remote server
REMOTE_SERVER="your_remote_server_address"
REMOTE_DIR="/path/to/remote/folder"
scp "$BACKUP_FILE" "$REMOTE_SERVER:$REMOTE_DIR"
# Delete backup files older than 7 days
find "$BACKUP_DIR" -type f -name "*.sql" -mtime +7 -exec rm {} \;
In the code above, we added an scp command to copy the backup file to the remote server. Replace “your_remote_server_address” with the actual IP address or domain name of the remote server where you want to store the backup file, and replace “/path/to/remote/folder” with the actual directory on the remote server.
Step 3: Test the Backup Script
To test the backup script, you can run the script manually and verify that the backup file is copied to the remote server.
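For example, you could run the script once and then list the remote directory over SSH (using the same placeholders as in the script):
./backup.sh
ssh your_remote_server_address "ls -lh /path/to/remote/folder"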
Step 4: Automate the Backup Process
To automate the backup process, you can set up a cron job to run the backup script at a specific time each day, as described in the previous section.
Conclusion
By modifying the backup script to copy the backup file to a remote server, you can ensure that your data is protected in case of local hardware failure or disaster. With the SCP command and remote server access, it’s easy to set up a secure and reliable backup process for your MySQL database.