[Gluster-users] 3.3.0 replica performance

Andrew andrew at donehue.net
Fri Oct 5 11:50:16 UTC 2012


Hi All,

I have two systems connected via 10GbE. The hardware is new and performs 
well (more details below), but I am hitting problems with write 
performance. I have spent a few days reviewing previous posts without 
success; any advice would be greatly appreciated.


Hardware:
4 AMD CPU cores available
6GB RAM allocated
LSI RAID card (6 x 10k rpm drives in RAID 5)
Intel 10GbE network cards (currently connected directly between the 
servers with a CAT6A cable, no switch)

Network performance is reasonable:
[  3]  0.0-10.0 sec  8.21 GBytes  7.05 Gbits/sec
MTU is 9000
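
That figure is from a plain iperf TCP throughput test, along these 
lines (defaults otherwise):

iperf -s                    # on 10.21.0.1
iperf -c 10.21.0.1 -t 10    # on 10.21.0.2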

Direct write to the local filesystem is good (a dd of twice the size of 
RAM, so the page cache is not the whole story):

dd if=/dev/zero bs=1M of=zero.dat count=12000
12000+0 records in
12000+0 records out
12582912000 bytes (13 GB) copied, 20.673 s, 609 MB/s
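
For completeness, the same test can be run with a flush at the end, so 
the page cache cannot inflate the reported rate:

dd if=/dev/zero bs=1M of=zero.dat count=12000 conv=fdatasync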

Write over the gluster mount (I am hoping to achieve around 300 MB/sec 
or more):

dd if=/dev/zero bs=1M of=zero.dat count=12000
12000+0 records in
12000+0 records out
12582912000 bytes (13 GB) copied, 111.19 s, 113 MB/s
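
For reference, the volume is mounted with the native FUSE client, along 
these lines (the mount point here is illustrative):

mount -t glusterfs 10.21.0.1:/vg0lv1 /mnt/gluster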

The above figure looks just like what I would expect from a 1Gbit link; 
however, it is definitely a 10GbE link (measured above at around 
7 Gbit/sec).

CPU doesn't max out, but it does go higher than I would expect for 
~113 MB/sec: total usage reaches around 260% out of a possible 400% 
across the 4 cores.
There is no real disk wait (the RAID card has caching and it is enabled).
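
Those figures were read with the usual tools, roughly:

top -H         # per-thread CPU; the glusterfs client threads stand out
iostat -x 1    # confirms there is no real iowait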


kernel:
uname -r
3.2.29

gluster:
glusterfs -V
glusterfs 3.3.0 built on Jun  6 2012 07:50:10

volume info:
gluster volume info

Volume Name: vg0lv1
Type: Replicate
Volume ID: 0375372e-c8a8-46ce-b152-d7575b0096ab
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 10.21.0.1:/mnt/vg1/lv1
Brick2: 10.21.0.2:/mnt/vg1/lv1
Options Reconfigured:
performance.cache-size: 1024MB
performance.write-behind-window-size: 128MB
performance.io-thread-count: 64
performance.flush-behind: on
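
For reference, the volume was created and tuned roughly as follows 
(reconstructed from the info above):

gluster volume create vg0lv1 replica 2 10.21.0.1:/mnt/vg1/lv1 10.21.0.2:/mnt/vg1/lv1
gluster volume set vg0lv1 performance.cache-size 1024MB
gluster volume set vg0lv1 performance.write-behind-window-size 128MB
gluster volume set vg0lv1 performance.io-thread-count 64
gluster volume set vg0lv1 performance.flush-behind on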

The brick filesystem is ext4, created with -I 512 (512-byte inodes, the 
size usually recommended for GlusterFS extended attributes).
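
That is, roughly (the device path here is illustrative):

mkfs.ext4 -I 512 /dev/vg1/lv1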

volfile info:
volume remote1
   type protocol/client
   option transport-type tcp
   option remote-host 10.21.0.1
   option remote-port 24007
   option remote-subvolume /mnt/vg1/lv1
   option transport.socket.nodelay on
end-volume

volume remote2
   type protocol/client
   option transport-type tcp
   option remote-host 10.21.0.2
   option remote-port 24007
   option remote-subvolume /mnt/vg1/lv1
   option transport.socket.nodelay on
end-volume

volume replicate
   type cluster/replicate
   subvolumes remote1 remote2
end-volume

volume writebehind
   type performance/write-behind
   option window-size 128MB
   option flush-behind on
   subvolumes replicate
end-volume


volume iothreads
   type performance/io-threads
   option thread-count 32
   subvolumes writebehind
end-volume
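
If the full generated client volfile is useful, it can be pulled from 
either server (assuming the default glusterd paths for 3.3):

cat /var/lib/glusterd/vols/vg0lv1/vg0lv1-fuse.vol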





