[Gluster-devel] how best to set up for performance?

Niall Dalton nialldalton at mac.com
Sun Mar 16 13:21:04 UTC 2008


On Mar 16, 2008, at 3:12 AM, Amar S. Tumballi wrote:

> Hey,
>  Just missed that 80GB file size part. Are you sure your disks are  
> fast enough to write/read at more than 200MBps for uncached files?  
> Can you run the dd directly on the backend and make sure you are  
> getting enough disk speed?



Sure thing - good to double check.

# caneland is the client, 192.168.3.2 one of my storage servers
root at caneland:/home/niall# ssh 192.168.3.2

# 192.168.3.2 is a 16GB memory machine
root at 192.168.3.2:~# free -g
              total       used       free     shared    buffers     cached
Mem:            15         15          0          0          0         13
-/+ buffers/cache:          2         13
Swap:            0          0          0

# /big is the target file system
root at 192.168.3.2:~# df -H
Filesystem             Size   Used  Avail Use% Mounted on
/dev/sdb1              4.0G   2.0G   1.8G  54% /
varrun                 8.5G   209k   8.5G   1% /var/run
varlock                8.5G      0   8.5G   0% /var/lock
udev                   8.5G    58k   8.5G   1% /dev
devshm                 8.5G      0   8.5G   0% /dev/shm
/dev/sda2              6.5T   4.6M   6.5T   1% /big

# even though RAM is only 16GB, let's nuke the caches to make sure
# there's no funny business
root at 192.168.3.2:~# echo "3" > /proc/sys/vm/drop_caches
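
(For the record, dropping caches this way doesn't write back dirty
pages; the belt-and-braces version people usually suggest runs sync
first. Same effect for this test, just extra paranoid:)

# flush dirty pages first, then drop page cache, dentries and inodes
sync
echo "3" > /proc/sys/vm/drop_caches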

# dd to write a big file..
root at 192.168.3.2:~# dd if=/dev/zero of=/big/big.file bs=8M count=10000
10000+0 records in
10000+0 records out
83886080000 bytes (84 GB) copied, 128.927 seconds, 651 MB/s
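
(If anyone suspects the 651 MB/s is flattered by the page cache, a
variant that folds the final flush into the timing, assuming a GNU dd
recent enough to have conv=fdatasync, would be:)

# same 80GB write, but include the final flush to disk in the timing
dd if=/dev/zero of=/big/big.file bs=8M count=10000 conv=fdatasync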

# we have the file..
root at 192.168.3.2:~# df -H
Filesystem             Size   Used  Avail Use% Mounted on
/dev/sdb1              4.0G   2.0G   1.8G  54% /
varrun                 8.5G   209k   8.5G   1% /var/run
varlock                8.5G      0   8.5G   0% /var/lock
udev                   8.5G    58k   8.5G   1% /dev
devshm                 8.5G      0   8.5G   0% /dev/shm
/dev/sda2              6.5T    84G   6.5T   2% /big


# nuke the caches out of sheer paranoia before the read test
root at 192.168.3.2:~# echo "3" > /proc/sys/vm/drop_caches

# dd to read the big file
root at 192.168.3.2:~# dd if=/big/big.file of=/dev/null bs=8M
10000+0 records in
10000+0 records out
83886080000 bytes (84 GB) copied, 108.51 seconds, 773 MB/s
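
(Another way to keep the page cache out of the read numbers, instead
of dropping caches by hand, is to read with O_DIRECT, assuming this
dd has iflag=direct:)

# re-read the file bypassing the page cache entirely (O_DIRECT)
dd if=/big/big.file of=/dev/null bs=8M iflag=direct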

# start an iperf server (and in another window do an iperf -c
# 192.168.3.2 from the client)
root at 192.168.3.2:~# iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[  4] local 192.168.3.2 port 5001 connected with 192.168.3.1 port 45751
[  4]  0.0-10.0 sec  7.24 GBytes  6.22 Gbits/sec

That could be tuned up further I'm sure, but 6.22 Gbit/s is roughly
775 MB/s per storage server, so the network shouldn't be the
bottleneck yet.
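
(If it ever does need tuning, the usual iperf knobs are parallel
streams, a bigger TCP window and a longer run; the numbers below are
just a starting point to tweak from, not a recommendation:)

# e.g. from the client: 4 parallel streams, 512k window, 30 second run
iperf -c 192.168.3.2 -P 4 -w 512k -t 30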






