[Techtalk] Dump very slow over Gigabit connection

Conor Daly conor.daly-linuxchix at cod.homelinux.org
Thu Sep 13 23:08:30 UTC 2007


I'm using dump to back up a number of filesystems to a tape hosted on a
remote machine.  The machine being backed up is still in commissioning, so
it hasn't got its full complement of data yet.  Last night, one 300GB
filesystem had about 21GB used.  It took 3.5 hours to dump at an average
rate of 1530 kB/s (according to dump).  By my calculations, if this
filesystem fills completely, it will take 57 hours to back up.

The dump runs like this:

The remote machine mounts the tape, then the local machine does:

export RSH=ssh

dump -0 -f remote:/dev/nst0 /filesystem
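
One thing I haven't tried yet is a larger record size.  If I'm reading
the dump man page right, -b sets the number of kilobytes per dump record
(the default is 10), so something like the following might cut down on
per-record round trips to rmt on the remote end (the 64 is just a guess
on my part):

export RSH=ssh

# untested: same dump, but with 64kB records instead of the default 10kB
dump -0 -b 64 -f remote:/dev/nst0 /filesystem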

The local and remote machines are connected by a crossover cable.  Their
connections show up in dmesg output as running at 1000Mb/s full duplex.  I
can do:

scp /path/to/7.1Gb_file remote:

and get about 40MB/s throughput, which translates to about 2 hours for
300GB.
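
So the link itself seems capable of far more than 1530 kB/s, which makes
me wonder whether rmt is the bottleneck.  One experiment I could try is
bypassing rmt altogether and piping dump straight through ssh to dd on
the remote end (untested, and obs=64k is just a guess at a sensible tape
block size):

# untested: dump to stdout, reblock to 64kB writes on the remote side
dump -0 -f - /filesystem | ssh remote "dd of=/dev/nst0 obs=64k"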

Any clue why dump is taking so long?  The filesystem it's dumping has
relatively few files, and each file is fairly large (~2GB).
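
I suppose I should also rule out the local disk.  Something like the
following (where somefile stands in for one of the ~2GB files) should at
least show whether the disk can be read a lot faster than 1530 kB/s:

# read one large file and let dd report the throughput
dd if=/filesystem/somefile of=/dev/null bs=1M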

I'm running dump on RHEL 5.

Conor

-- 
Conor Daly <conor.daly at cod.homelinux.org>
-----BEGIN GEEK CODE BLOCK-----
Version: 3.1
GCS/G/S/O d+(-) s:+ a+ C++(+) UL++++ US++ P>++ L+++>++++ E--- W++ !N
PS+ PE Y+ PGP? tv(-) b+++(+) G e+++(*) h-- r+++ z++++ 
------END GEEK CODE BLOCK------
http://www.geekcode.com/ http://www.ebb.org/ungeek/

