How to speed up a MySQL import

1. Throttle the process (works everywhere)

Lower CPU and I/O priority (nice for CPU; ionice best-effort class, lowest priority)

nice -n 15 ionice -c2 -n7 mysql -uUSER -p DBNAME < dump.sql

Hard limit CPU % (install cpulimit)

cpulimit -l 40 -- mysql -uUSER -p DBNAME < dump.sql

Hard limit RAM + CPU using systemd-run (best on systemd hosts)

systemd-run --scope -p MemoryMax=4G -p CPUQuota=50% \
  bash -lc 'mysql -uUSER -p DBNAME < dump.sql'

View running systemd-run scopes:

systemctl list-units 'run-*.scope' --no-pager
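
If the limits turn out to be too tight (or too loose), they can be adjusted on a running scope without restarting the import; the scope name below is a placeholder for whatever list-units printed:

# adjust CPU and memory limits on a live scope (name is a placeholder)
systemctl set-property run-r1234abcd.scope CPUQuota=30% MemoryMax=2G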

2. Faster but still reasonably safe import settings (InnoDB)

If it’s a plain SQL dump (not a physical backup), these settings can reduce overhead during the import:

Run before import:

SET autocommit=0;
SET unique_checks=0;
SET foreign_key_checks=0;
SET sql_log_bin=0;

Then run the import, and afterwards:

COMMIT;
SET unique_checks=1;
SET foreign_key_checks=1;

Notes:

  • foreign_key_checks=0 can massively reduce work, but only do it if you trust the dump’s integrity
  • use sql_log_bin=0 only if you don’t want replicas to receive this import (and you have the privilege to set it)
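
One way to apply all of this in a single session is to feed the wrappers and the dump to mysql together; a minimal sketch, reusing the USER/DBNAME/dump.sql placeholders from above:

{
  # disable checks and binlogging for the duration of the import
  echo 'SET autocommit=0; SET unique_checks=0; SET foreign_key_checks=0; SET sql_log_bin=0;'
  cat dump.sql
  # commit once and restore the checks
  echo 'COMMIT; SET unique_checks=1; SET foreign_key_checks=1;'
} | mysql -uUSER -p DBNAME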

3. Limit how hard InnoDB works

These are server settings (they can be changed temporarily at runtime if you’re careful):

  • Cap background I/O work (see the sketch after this list)
    • innodb_io_capacity
    • innodb_io_capacity_max
  • Avoid an oversized buffer pool on a shared server
    • innodb_buffer_pool_size (too large can starve the OS / other services)
  • If you’re on a small box, reduce concurrency a bit:
    • innodb_thread_concurrency (an older knob, but it sometimes helps)
    • also watch max_connections so other workloads don’t pile on
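
As a sketch, the I/O caps can be lowered at runtime and restored afterwards; both are dynamic variables, and the numbers below are illustrative, not recommendations, so record your current values first:

# note the current values so you can restore them after the import
mysql -uUSER -p -e "SELECT @@innodb_io_capacity, @@innodb_io_capacity_max;"
# lower the caps for the duration of the import (illustrative values)
mysql -uUSER -p -e "SET GLOBAL innodb_io_capacity = 200; SET GLOBAL innodb_io_capacity_max = 400;"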

4. Practical import tricks that prevent "death spirals"

  • Import during low traffic
  • If the dump is huge, split it and import in chunks; note that split cuts on line boundaries, so a statement spanning multiple lines can be torn in half (see the sanity check after this list):
    split -l 200000 dump.sql part_
    for f in part_*; do mysql -uUSER -p DBNAME < "$f"; done
    (with -p you’ll be prompted for the password once per chunk; credentials in ~/.my.cnf avoid that)
  • If it’s a logical dump produced with mysqldump, restoring tends to be much smoother if the dump was created with:
    • --single-transaction (for InnoDB)
    • --quick
    • extended inserts (mysqldump’s default; see the example after this list)
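
A quick sanity check for the chunked import above (a heuristic, not a guarantee): each chunk should end in a complete statement, i.e. its last line should end with a semicolon:

for f in part_*; do
  # flag chunks whose last line doesn't end with ";" (a statement may have been cut)
  tail -n 1 "$f" | grep -q ';[[:space:]]*$' || echo "check $f: may end mid-statement"
done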
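
For reference, a dump created with those options looks like this; --extended-insert is mysqldump’s default and is spelled out only for clarity:

mysqldump --single-transaction --quick --extended-insert -uUSER -p DBNAME > dump.sql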