[Techtalk] Dump very slow over Gigabit connection

Wim De Smet kromagg at gmail.com
Fri Sep 14 11:45:29 UTC 2007


On 9/14/07, Conor Daly <conor.daly-linuxchix at cod.homelinux.org> wrote:
> I'm using dump to backup a number of filesystems to a tape hosted on a
> remote machine.  The machine being backed up is still in commissioning so
> it hasn't got its full complement of data yet.  Last night, one 300GB
> filesystem had about 21GB used.  It took 3.5 hours to dump at an average
> rate of 1530 kB/s (according to dump).  By my calculations, if this
> filesystem fills completely, it will take 57 hours to back up.
> The dump runs like this:
> Remote machine mounts the tape
> local machine does:
> export RSH=ssh
> dump -0 -f remote:/dev/nst0 /filesystem
> local and remote machines are connected by a crossover cable.  Their
> connections show up in dmesg output as running at 1000Mbit full duplex.  I
> can do:
> scp /path/to/7.1GB_file remote:
> and get about 40MB/s throughput, which translates to about 2 hours for
> 300GB.
> Any clue why dump is taking so long?  The filesystem it's dumping doesn't
> have many files, and each file is fairly large (~2GB).
> Running dump on RHEL 5
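As a sanity check on the figures quoted above, the observed rate and the projected full-filesystem time can be recomputed (the 1530 kB/s number is dump's own reported average, which is a little below the raw data/time ratio):

```python
# Back-of-the-envelope check of the figures in the quoted mail.
used_kb = 21 * 1_000_000        # ~21 GB dumped, in kB
elapsed_s = 3.5 * 3600          # 3.5 hours

rate_kb_s = used_kb / elapsed_s           # observed average throughput
full_kb = 300 * 1_000_000                 # a full 300 GB filesystem
projected_h = full_kb / rate_kb_s / 3600  # time to dump it at that rate

print(round(rate_kb_s))    # ~1667 kB/s, close to dump's reported 1530
print(round(projected_h))  # ~50 hours, in line with the ~57 h estimate
```

Either way, the dump is running at roughly 1/25th of the ~40MB/s the scp test achieves over the same link, so something other than the network is the bottleneck.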

Can you check what the compression settings are? Also, have you tried
doing a local dump? While dump is running, have a look at iostat and
top output to see where it's spending its time. Personally I dump with
bzip2 compression, and that really slows things down because it makes
the process partly CPU-bound.
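One way to follow that advice is to time each leg of the path separately; a rough sketch, using the hostname, device, and filesystem paths from the original mail (adjust to your setup):

```shell
# 1. Local dump to /dev/null: measures raw read + dump speed,
#    with no ssh and no tape drive involved.
time dump -0 -f /dev/null /filesystem

# 2. Dump over ssh to /dev/null on the remote host: adds the network
#    and ssh encryption overhead, but still bypasses the tape.
time dump -0 -f - /filesystem | ssh remote 'cat > /dev/null'

# 3. While the real dump to tape runs, watch in another terminal for
#    a saturated device or a CPU-bound process.
iostat -x 5
top
```

If step 2 runs close to the scp speed, the tape drive itself is the likely bottleneck; dump's -b option (blocking factor) can help there. On the Linux dump that ships with RHEL, built-in compression is enabled with -z (zlib) or -j (bzip2), and -j in particular tends to make the dump CPU-bound.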
