MySQL: Backups with mysqldump

This is a guide about creating MySQL backups using mysqldump.

Using mysqldump has the following characteristics:

  • Online backup;
  • Full backup;
  • Selection of databases and tables;
  • For both MyISAM and InnoDB tables;
  • Can be run remotely;
  • Read lock during the dump, unless only InnoDB tables are dumped.

Creating a backup

To create a consistent, full backup, on a database with MyISAM and/or InnoDB tables, use:

# On the command line
# Set the connection information and authorisation for the local or remote server
CONN="-u root -proot -hlocalhost"
mysqldump $CONN --all-databases --opt --force --lock-all-tables --flush-logs \
--add-drop-database --flush-privileges --allow-keywords --hex-blob

This will dump all user-defined databases as well as the mysql database. The performance_schema and information_schema databases are not included. Note that during the dump, a read lock is held on all databases and all objects: queries that read data will work, but queries that try to change data will hang or fail.
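
As an illustration, this is what the lock means for a second session during the dump (the database and table names below are hypothetical):

# In MySQL, from a second session while the dump is running
SELECT COUNT(*) FROM db1.t1;            -- reads still work
UPDATE db1.t1 SET c1 = 0 WHERE id = 1;  -- blocks until the dump releases the lock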

The output will go to stdout. Redirect it to a file, optionally piping it through some compression program first:

# On the command line
mysqldump .... | gzip > dump.sql.gz
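
A fuller sketch, combining the full-backup command with compression and a date-stamped file name (the backup directory and the naming pattern are only examples, not fixed conventions):

# On the command line
CONN="-u root -proot -hlocalhost"
# The backup directory and file name pattern are assumptions
BACKUPDIR=/var/backups/mysql
mysqldump $CONN --all-databases --opt --force --lock-all-tables --flush-logs \
--add-drop-database --flush-privileges --allow-keywords --hex-blob \
| gzip > "$BACKUPDIR/dump-$(date +%Y%m%d-%H%M%S).sql.gz"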

Options used:

  • --all-databases: include all databases in the dump, except for performance_schema and information_schema.
  • --opt: optional as of MySQL 5.6. This sets several sane defaults:
    • --add-drop-table: add drop table statements to the dump before each create table statement.
    • --add-locks: add lock table statements to the dump. This will load the tables faster.
    • --create-options: include MySQL-specific create table statement options in the dump.
    • --disable-keys: add disable keys statements to the dump, in order to create indexes only after inserting records. This will load the table faster.
    • --extended-insert: use multi-row insert statements in the dump. This will load the table faster.
    • --lock-tables: while dumping, lock each database separately using a read lock. This enforces consistency within each database, but not between databases.
    • --quick: while dumping, retrieve rows one at a time instead of buffering entire tables in memory, so large tables are handled efficiently.
    • --set-charset: add a set names statement to the dump. This will ensure character data will be interpreted correctly.
  • --force: continue on errors; especially useful for invalid views.
  • --lock-all-tables: while dumping, lock all tables in all databases using a read lock. This overrules --lock-tables and ensures consistency between all objects in all databases.
  • --flush-logs: flush the logs before generating the dump. This ensures a new binary log is started at dump time, which makes a later roll-forward easier (see the sketch after this list).
  • --add-drop-database: add drop database statements to the dump before each create database statement
  • --flush-privileges: Add a flush privileges statement to the dump, to reload all privileges included in the mysql database.
  • --allow-keywords: handle column names that are keywords correctly in the dump
  • --hex-blob: use hexadecimal numbers for binary fields instead of character strings. Such character strings may contain invalid utf-8 sequences; though MySQL handles them correctly, other programs may not.
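
To see the effect of --flush-logs, compare the list of binary logs before and after the dump; a sketch:

# In MySQL
# After a dump with --flush-logs, the newest binary log in this list was
# started at dump time, so a roll-forward can begin there
SHOW BINARY LOGS;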

Events, procedures, and functions are automatically included, because the mysql database is also dumped.

If you are using GTIDs (available as of MySQL 5.6), you may get a scary-looking warning that you can safely ignore:

Warning: A partial dump from a server that has GTIDs will by default include the GTIDs of all transactions, even those that changed suppressed parts of the database. If you don't want to restore GTIDs, pass --set-gtid-purged=OFF. To make a complete dump, pass --all-databases --triggers --routines --events. 
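
As the warning itself suggests, you can silence it by passing --set-gtid-purged=OFF, which leaves the GTID state out of the dump; a sketch:

# On the command line
CONN="-u root -proot -hlocalhost"
# Only do this if you do not need the GTID state in the dump
mysqldump $CONN --all-databases --opt --force --lock-all-tables --flush-logs \
--add-drop-database --flush-privileges --allow-keywords --hex-blob \
--set-gtid-purged=OFF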

Backing up only a few databases

If you want to dump only one or more databases not including the mysql database, you have to add a few other options:

# On the command line
# Set the connection information and authorisation for the local or remote server
CONN="-u root -proot -hlocalhost"
# Set the databases to dump
DATABASES="db1 db2 db3"
mysqldump $CONN --opt --force --lock-all-tables --flush-logs \
--add-drop-database --allow-keywords --hex-blob --events --routines \
--databases $DATABASES

This adds the following options:

  • --events: dump all events in the databases to dump
  • --routines: dump all stored functions and procedures in the databases to dump
  • --databases: indicate that all non-option parameters are database names. The databases to dump are added at the end.

Option --flush-privileges has been removed, as the mysql database is not changed directly.
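
mysqldump can also select individual tables: without --databases, the first non-option argument is the database and any following arguments are table names. A minimal sketch (the database and table names are hypothetical):

# On the command line
CONN="-u root -proot -hlocalhost"
# The first name is the database, the rest are tables in that database
mysqldump $CONN --opt --force --allow-keywords --hex-blob \
db1 table1 table2 > tables.sql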

If you are using GTIDs, you will once again get the scary-looking message. Again, ignore it.

Backing up only InnoDB tables

If you only have InnoDB tables to dump (note that the mysql database always uses MyISAM) and you are sure that no DDL statements will be executed on the tables during the dump, you can get away with less locking:

# On the command line
# Set the connection information and authorisation for the local or remote server
CONN="-u root -proot -hlocalhost"
# Set the databases to dump
DATABASES="db1 db2 db3"
mysqldump $CONN --opt --force --single-transaction --flush-logs \
--add-drop-database --allow-keywords --hex-blob --events --routines \
--databases $DATABASES

This adds the following options:

  • --single-transaction (replaces --lock-all-tables): use a REPEATABLE READ transaction without locking to create the dump.

If you are using GTIDs, you will once again get the scary-looking message. Again, ignore it.

Backing up in a replication setting

A master server can add information to the dump that helps create a new slave, and a slave server can add information to the dump that helps create another slave of the same master. If you are using GTIDs, you need neither.

To record the binary log position of a master server (or any other server that has binary logging enabled), use the --master-data option. To record the binary log position of the master of a slave, use the --dump-slave option. Both options can be set to 2 (record the CHANGE MASTER TO statement only as a comment) or 1 (add an active CHANGE MASTER TO statement). It is not possible to use both options.

Use --master-data=1 if you will use the dump to create a slave of the server you are dumping.

Use --dump-slave=1 if you want to use the dump to create another slave of the same master as the server you are dumping.

Use neither if you are using GTIDs (available as of MySQL 5.6).
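
As a sketch, based on the "few databases" command above, a dump taken to seed a new slave of the dumped server could look like this:

# On the command line
CONN="-u root -proot -hlocalhost"
DATABASES="db1 db2 db3"
# --master-data=1 adds an active CHANGE MASTER TO statement to the dump
mysqldump $CONN --opt --force --lock-all-tables --flush-logs \
--add-drop-database --allow-keywords --hex-blob --events --routines \
--master-data=1 --databases $DATABASES > slave-seed.sql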

Restoring a backup

To restore a backup, use:

# On the command line
# Set the connection information and authorisation for the local or remote server
CONN="-u root -proot -hlocalhost"
DUMPFILE=/tmp/dump.sql
mysql $CONN --force < $DUMPFILE

If your dump is compressed, first decompress it:

# On the command line
# Set the connection information and authorisation for the local or remote server
CONN="-u root -proot -hlocalhost"
DUMPFILE=/tmp/dump.sql.gz
zcat $DUMPFILE | mysql $CONN --force

If any of the above backup methods was used, this will completely remove the database(s) and restore them to their state at the time of the backup.

Note that no binary logs are written for the modifications made by the restore.

Restoring using GTIDs

If you are using GTIDs, you also have to decide what to do with them.

If this is essentially a restore of a full backup, if you are creating a new slave, or if you want to do a roll-forward, you have to register the GTID state as it was at the time of the backup. The only way to do that is to start with an empty list of executed GTIDs.

# In MySQL
RESET MASTER;
# Now import the dump from the command line

Note that this will remove all your binary logs from disk. Be especially careful if you wanted to use them to do a roll forward.
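
To verify the server really starts from a clean slate, you can check the executed-GTID set before importing; it must be empty:

# In MySQL
# An empty result means the dump will be able to set GTID_PURGED
SELECT @@GLOBAL.GTID_EXECUTED;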

Alternatively, you can choose to just ignore the GTIDs. You will get a warning during the import, but the list of already executed GTIDs will not change:

ERROR 1840 (HY000) at line 24: @@GLOBAL.GTID_PURGED can only be set when @@GLOBAL.GTID_EXECUTED is empty.

This will not lead to any problems, unless the server is or will become part of a replication setup, or unless you want to apply binary logs for a roll-forward (for the latter there is a work-around).

Creating a slave

If you need to create a new slave server, restore as described above.

If you are using GTIDs, make sure you executed the RESET MASTER before the restore. After the restore, connect to the master server using a CHANGE MASTER TO statement with MASTER_AUTO_POSITION=1, as sketched below.
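
A sketch of that statement (the host name and credentials are placeholders):

# In MySQL, on the new slave after the restore
CHANGE MASTER TO
  MASTER_HOST='master.example.com',
  MASTER_USER='repl',
  MASTER_PASSWORD='secret',
  MASTER_AUTO_POSITION=1;
START SLAVE;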

If you are not using GTIDs, and you created the dump with --master-data=1 or --dump-slave=1, the position in the master logs should have been set automatically.
