Lower the import’s CPU and IO priority:

```bash
nice -n 15 ionice -c2 -n7 mysql -uUSER -p DBNAME < dump.sql
```

Or hard-cap its CPU share with cpulimit:

```bash
cpulimit -l 40 -- mysql -uUSER -p DBNAME < dump.sql
```

Or run it in a transient systemd scope with memory and CPU limits:

```bash
systemd-run --scope -p MemoryMax=4G -p CPUQuota=50% \
  bash -lc 'mysql -uUSER -p DBNAME < dump.sql'
```

You can check on the scope with:

```bash
systemctl list-units 'run-*.scope' --no-pager
```

If it’s a plain SQL dump (not a physical backup), these session settings can reduce overhead during the import:
Run before import:

```sql
SET autocommit=0;
SET unique_checks=0;
SET foreign_key_checks=0;
SET sql_log_bin=0;
```

Then import, and afterwards run:
```sql
COMMIT;
SET unique_checks=1;
SET foreign_key_checks=1;
```

Notes:
- `foreign_key_checks=0` can massively reduce work, but only do it if you trust the dump’s integrity
- `sql_log_bin=0` only if you don’t want replication to receive this import (and you have the privileges)
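A minimal sketch of wrapping the import with those settings, without editing dump.sql (assumes bash; `SET sql_log_bin=0` needs SUPER or an equivalent privilege):

```bash
# Prepend the session settings, stream the dump, then restore the defaults.
{
  echo "SET autocommit=0; SET unique_checks=0;"
  echo "SET foreign_key_checks=0; SET sql_log_bin=0;"
  cat dump.sql
  echo "COMMIT; SET unique_checks=1; SET foreign_key_checks=1;"
} | mysql -uUSER -p DBNAME
```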
These are server settings (most are dynamic, so a `SET GLOBAL` change is temporary and reverts on restart):
- Cap background IO work: `innodb_io_capacity` and `innodb_io_capacity_max`
- Avoid a huge buffer pool on a shared server: `innodb_buffer_pool_size` (too large can starve the OS / other services)
- If you’re on a small box, reduce concurrency a bit: `innodb_thread_concurrency` (an older knob, but it sometimes helps)
- Also watch `max_connections` so you don’t have other workload piling on
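As a minimal sketch, the IO caps can be dialed down at runtime (values purely illustrative; note the current settings first so you can restore them):

```bash
# Record current values so you can restore them after the import.
mysql -uUSER -p -e "SHOW GLOBAL VARIABLES LIKE 'innodb_io_capacity%';"
# Illustrative values; needs a privileged account, and reverts on server restart.
mysql -uUSER -p -e "SET GLOBAL innodb_io_capacity = 200; SET GLOBAL innodb_io_capacity_max = 400;"
```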
- Import during low traffic
- If the dump is huge, split it and import in chunks:
```bash
split -l 200000 dump.sql part_
for f in part_*; do mysql -uUSER -p DBNAME < "$f"; done
```

  Be aware that line-based splitting can cut a multi-line statement (e.g. a `CREATE TABLE`) in half, so sanity-check the chunk boundaries first.
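If the goal is just to keep an eye on a long single-file import rather than to chunk it, piping the dump through pv (if installed) gives progress and throughput:

```bash
# Requires the pv utility; shows bytes transferred, rate, and ETA.
pv dump.sql | mysql -uUSER -p DBNAME
```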
- If it’s a logical dump produced with mysqldump, restoring tends to be much smoother if the dump was created with:
  - `--single-transaction` (for InnoDB)
  - `--quick`
  - and ideally with extended inserts (`--extended-insert`, the default)
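For reference, a sketch of creating a dump with those options (placeholder names as above; `--extended-insert` is normally on by default):

```bash
# --single-transaction takes a consistent InnoDB snapshot without locking,
# --quick streams rows instead of buffering whole tables in memory.
mysqldump -uUSER -p --single-transaction --quick --extended-insert \
  DBNAME > dump.sql
```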