[Gluster-users] tuning
Papp Tamas
tompos at martos.bme.hu
Thu Jul 7 08:40:41 UTC 2011
On 2011-07-06 23:58, Papp Tamas wrote:
> hi!
>
> I'm almost completely new to glusterfs.
>
> Until now we used Storage Platform (3.0.5).
> Today we installed Ubuntu 11.04 and glusterfs 3.2.1.
>
> $ cat w-vol-fuse.vol
> volume w-vol-client-0
> type protocol/client
> option remote-host gl0
> option remote-subvolume /mnt/brick1
> option transport-type tcp
> end-volume
>
> volume w-vol-client-1
> type protocol/client
> option remote-host gl1
> option remote-subvolume /mnt/brick1
> option transport-type tcp
> end-volume
>
> volume w-vol-client-2
> type protocol/client
> option remote-host gl2
> option remote-subvolume /mnt/brick1
> option transport-type tcp
> end-volume
>
> volume w-vol-client-3
> type protocol/client
> option remote-host gl3
> option remote-subvolume /mnt/brick1
> option transport-type tcp
> end-volume
>
> volume w-vol-client-4
> type protocol/client
> option remote-host gl4
> option remote-subvolume /mnt/brick1
> option transport-type tcp
> end-volume
>
> volume w-vol-dht
> type cluster/distribute
> subvolumes w-vol-client-0 w-vol-client-1 w-vol-client-2 w-vol-client-3 w-vol-client-4
> end-volume
>
> volume w-vol-write-behind
> type performance/write-behind
> option cache-size 4MB
> subvolumes w-vol-dht
> end-volume
>
> volume w-vol-read-ahead
> type performance/read-ahead
> subvolumes w-vol-write-behind
> end-volume
>
> volume w-vol-io-cache
> type performance/io-cache
> option cache-size 128MB
> subvolumes w-vol-read-ahead
> end-volume
>
> volume w-vol-quick-read
> type performance/quick-read
> option cache-size 128MB
> subvolumes w-vol-io-cache
> end-volume
>
> volume w-vol-stat-prefetch
> type performance/stat-prefetch
> subvolumes w-vol-quick-read
> end-volume
>
> volume w-vol
> type debug/io-stats
> option latency-measurement off
> option count-fop-hits off
> subvolumes w-vol-stat-prefetch
> end-volume
>
>
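(A note on the client stack above: these volfiles are generated by glusterd,
and the performance translators map to regular volume options, so they can be
tuned from the CLI instead of editing the files by hand. A minimal sketch,
assuming the stock 3.2.x option names; the 256MB value is just an example:

$ gluster volume set w-vol performance.write-behind-window-size 4MB
$ gluster volume set w-vol performance.cache-size 256MB

Changed values then show up under "Options Reconfigured" in the output of
"gluster volume info w-vol".)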
> $ cat w-vol.gl0.mnt-brick1.vol
> volume w-vol-posix
> type storage/posix
> option directory /mnt/brick1
> end-volume
>
> volume w-vol-access-control
> type features/access-control
> subvolumes w-vol-posix
> end-volume
>
> volume w-vol-locks
> type features/locks
> subvolumes w-vol-access-control
> end-volume
>
> volume w-vol-io-threads
> type performance/io-threads
> subvolumes w-vol-locks
> end-volume
>
> volume w-vol-marker
> type features/marker
> option volume-uuid ad362448-7ef0-49ae-b13c-74cb82ce9be5
> option timestamp-file /etc/glusterd/vols/w-vol/marker.tstamp
> option xtime off
> option quota off
> subvolumes w-vol-io-threads
> end-volume
>
> volume /mnt/brick1
> type debug/io-stats
> option latency-measurement off
> option count-fop-hits off
> subvolumes w-vol-marker
> end-volume
>
> volume w-vol-server
> type protocol/server
> option transport-type tcp
> option auth.addr./mnt/brick1.allow *
> subvolumes /mnt/brick1
> end-volume
>
>
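(The brick side also runs an io-threads translator, and its thread count is
exposed as a volume option too. A sketch, assuming the 3.2.x option name and
its default of 16 threads:

$ gluster volume set w-vol performance.io-thread-count 32

Raising it mostly helps with many concurrent clients rather than a single
sequential stream.)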
> There are 5 nodes: 3 have 8 disks in RAID 6 (Supermicro servers, Areca
> controller), and 2 have 8 disks in RAID 5 plus a spare (HP DL180).
> The filesystem for the data was created with this command (slightly
> different on the HPs, of course):
>
> mkfs.xfs -b size=4096 -d sunit=256,swidth=1536 -L gluster /dev/sda4
>
>
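(For reference, sunit=256 and swidth=1536 are in 512-byte sectors, i.e. a
128 KB stripe unit across 6 data disks, which matches an 8-disk RAID 6. XFS
picks the geometry up again at mount time; a sketch of brick mount options,
where noatime and inode64 are assumptions rather than part of the original
setup:

$ mount -o noatime,inode64 LABEL=gluster /mnt/brick1
)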
> The performance is far below what it was before. I tried to modify
> these settings:
>
> gluster volume set w-vol performance.write-behind-window-size 4MB
> gluster volume set w-vol performance.cache-size 128MB
> gluster volume set w-vol nfs.disable on
>
> echo 512 > /sys/block/sda/queue/nr_requests
> blockdev --setra 16384 /dev/sda
> sysctl -w vm.swappiness=5
> sysctl -w vm.dirty_background_ratio=3
> sysctl -w vm.dirty_ratio=40
> sysctl -w kernel.sysrq=0
>
> Nothing really helped.
> Can somebody give me some pointers?
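(Before tuning further it is worth verifying that the settings actually took
effect on every node. A quick sketch:

$ gluster volume info w-vol | grep -A 10 'Options Reconfigured'
$ sysctl vm.dirty_ratio vm.dirty_background_ratio vm.swappiness
)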
Some more information.
On a node, writing directly to the local filesystem:
$ dd if=/dev/zero of=adsdfgrr bs=128K count=100k oflag=direct
102400+0 records in
102400+0 records out
13421772800 bytes (13 GB) copied, 27.4022 s, 490 MB/s
The same test on the gluster volume gives ~50-60 MB/s.
The network layer is gigabit Ethernet; the nodes are connected with two NICs in a bonded configuration.
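(A single gigabit link tops out around 117 MB/s, and with most bonding modes a
single TCP stream only ever uses one slave link, so the raw network path is
worth measuring. A sketch, assuming iperf is installed; gl0 and gl1 are the
hostnames from the volfile:

on gl0: $ iperf -s
on gl1: $ iperf -c gl0 -t 30

If that reports well under ~900 Mbit/s, the bond rather than glusterfs is the
first suspect.)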
I am absolutely desperate. Is it Ubuntu? Would it be better with Fedora?
Or does the Storage Platform run on an optimized kernel or something
like that?
Thank you,
tamas