WiredTiger cursors provide access to data from a variety of sources. One of these sources is the list of files required to perform a backup of the database. The list may be the files required by all of the objects in the database, or a subset of the objects in the database.
WiredTiger backups are "on-line" or "hot" backups, and applications may continue to read and write the databases while a snapshot is taken.
"backup:"
data source, which begins the process of a backup.The directory into which the files are copied may subsequently be specified as an directory to the wiredtiger_open function and accessed as a WiredTiger database home.
Copying the database files for a backup does not require any special alignment or block size (specifically, Linux or Windows filesystems that do not support read/write isolation can be safely read for backups).
The cursor must not be closed until all of the files have been copied; however, there is no requirement that the files be copied in any order or in any relationship to the WT_CURSOR::next calls, only that all files be copied before the cursor is closed. For example, applications might aggregate the file names from the cursor and then list the file names as arguments to a file archiver such as the system tar utility.
During the period the backup cursor is open, database checkpoints can be created, but no checkpoints can be deleted. This may result in significant file growth.
The following is a programmatic example of creating a backup:
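A minimal sketch of the process, assuming an open WT_SESSION named session and a hypothetical copy_file() helper that copies a named file from the database home into the backup directory (the backup path shown is illustrative):

```c
WT_CURSOR *cursor;
const char *filename;
int ret;

/* Open the backup cursor: the database files it lists remain stable. */
ret = session->open_cursor(session, "backup:", NULL, NULL, &cursor);

/* Iterate over the list of files, copying each one. */
while ((ret = cursor->next(cursor)) == 0) {
	ret = cursor->get_key(cursor, &filename);
	ret = copy_file(filename, "/path/database.backup");
}

/* Closing the cursor ends the backup. */
ret = cursor->close(cursor);
```

Error handling is elided here for brevity; a production application would check each return value.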
In cases where the backup is desired for a checkpoint other than the most recent, applications can discard all checkpoints subsequent to the checkpoint they want using the WT_SESSION::checkpoint method. For example:
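A sketch of that step, assuming an open WT_SESSION named session and a previously created named checkpoint (the name "midnight" is hypothetical); the drop configuration removes the named checkpoint and all checkpoints created after it, leaving an earlier checkpoint as the most recent:

```c
int ret;

/*
 * Drop the hypothetical checkpoint "midnight" and all subsequent
 * checkpoints; a backup cursor opened afterward captures the most
 * recent remaining checkpoint.
 */
ret = session->checkpoint(session, "drop=(from=midnight)");
```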
The wt backup command may also be used to create backups:
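For example, a backup into an empty target directory might look like the following; the source and target paths are illustrative:

```shell
# Create an empty target directory, then back up the database into it.
rm -rf /path/database.backup &&
    mkdir /path/database.backup &&
    wt -h /path/database.source backup /path/database.backup
```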
Once a backup has been done, it can be rolled forward incrementally by adding log files to the backup copy. Adding log files to the copy decreases potential data loss from switching to the copy, but increases the recovery time necessary to switch to the copy. To reset the recovery time necessary to switch to the copy, perform a full backup of the database. For example, an application might do a full backup of the database once a week during a quiet period, and then incrementally copy new log files into the backup directory for the rest of the week.
WiredTiger log files are named "WiredTigerLog.[number]" where "[number]" is a 10-digit value, for example, "WiredTigerLog.0000000001". The log file with the largest number in its name is the most recent log file written. To incrementally add log files to a backup, simply copy newly created log files from the database home to the backup copy.
Bulk-loads are not commit-level durable, that is, the creation and bulk-load of an object will not appear in the database log files. For this reason, applications doing incremental backups after a full backup should repeat the full backup step after doing a bulk-load to make the bulk-load durable. In addition, incremental backups after a bulk-load can cause recovery to report errors because there are log records that apply to data files which don't appear in the backup.
By default, WiredTiger automatically removes log files no longer required for recovery. Applications wanting to use log files for incremental backup must first disable automatic log file removal using the log=(archive=false) configuration to wiredtiger_open.
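For example, a database opened for incremental backup might be configured as follows; the home directory path is illustrative, and error handling is elided:

```c
WT_CONNECTION *conn;
int ret;

/* Enable logging and keep log files for incremental backup. */
ret = wiredtiger_open("/path/database", NULL,
    "create,log=(enabled=true,archive=false)", &conn);
```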
The following is the procedure for incrementally backing up a database and removing log files from the original database home:

1. Perform a full backup of the database (as described above).
2. Open a backup cursor configured with target=("log:") to retrieve the list of log files.
3. Copy each log file returned by the cursor to the backup directory; it is not an error to copy a log file that has been copied before.
4. Remove the copied log files from the database home using the WT_SESSION::truncate method with the URI "log:" and the backup cursor as an argument.
Steps 2, 3 and 4 can be repeated any number of times before step 1 is repeated.
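A sketch of the repeatable portion of the procedure, assuming an open WT_SESSION named session and the same hypothetical copy_file() helper and backup path as above:

```c
WT_CURSOR *cursor;
const char *filename;
int ret;

/* Open a backup cursor over the log files only. */
ret = session->open_cursor(
    session, "backup:", NULL, "target=(\"log:\")", &cursor);

/* Copy each log file to the backup directory. */
while ((ret = cursor->next(cursor)) == 0) {
	ret = cursor->get_key(cursor, &filename);
	ret = copy_file(filename, "/path/database.backup");
}

/* Remove the copied log files from the database home. */
ret = session->truncate(session, "log:", cursor, NULL, NULL);

ret = cursor->close(cursor);
```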
Many Linux systems do not support mixing O_DIRECT and memory mapping or normal I/O to the same file. If O_DIRECT is configured for data or log files on Linux systems (using the wiredtiger_open direct_io configuration), any program used to copy files during backup should also specify O_DIRECT when configuring its file access. Likewise, when O_DIRECT is not configured by the database application, programs copying files should not configure O_DIRECT.