To speed up the dump and restore of a big table, you can split it into small chunks. The chunk size is specified as a number of rows, and each chunk is written to a separate file:
shell> mydumper -u root --password=<password> -B employees -T employees --triggers --events --routines --rows=10000 -t 8 --trx-consistency-only --outputdir /backups/employee_table_chunks
- -t: Specifies the number of parallel dump threads
- --trx-consistency-only: If you are using only transactional tables, such as InnoDB, this option minimizes locking
- --rows: Splits the table into chunks of this many rows
For each chunk, a file is created as <database_name>.<table_name>.<number>.sql; the number identifies the chunk, starting from 0.
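As a rough sketch of what to expect, the snippet below estimates the chunk-file count for a hypothetical table size (the 1,200,000-row figure is an assumption for illustration; mydumper's actual chunk boundaries may vary slightly), and shows a parallel restore with myloader, which ships alongside mydumper:

```shell
# Hypothetical listing of /backups/employee_table_chunks after the dump:
#   employees-schema-create.sql
#   employees.employees-schema.sql
#   employees.employees.00000.sql
#   employees.employees.00001.sql
#   ...

# Rough estimate of the number of chunk files: total rows divided by
# --rows, rounded up. The row count here is an assumed example value.
total_rows=1200000
rows_per_chunk=10000
chunks=$(( (total_rows + rows_per_chunk - 1) / rows_per_chunk ))
echo "$chunks"

# Restore in parallel with myloader, matching the dump's thread count;
# -o/--overwrite-tables drops and recreates each table before loading:
# myloader -u root --password=<password> -B employees -t 8 -o \
#          --directory /backups/employee_table_chunks
```

Because each chunk is an independent file, myloader can load them concurrently across its threads, which is where most of the restore speedup comes from.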