[Gluster-devel] Performance
Roland Fischer
roland.fischer at xidras.com
Thu Mar 18 07:16:08 UTC 2010
Hi Raghavendra G,

I use Xen 3.4.1 on GlusterFS; the virtual machine images are hosted on
GlusterFS. I run 16 domUs (Xen images) on GlusterFS and the disk speed
is awfully slow. In my monitoring tool I see a lot of CPU wait time
(meaning the CPUs of the domUs are waiting on GlusterFS). Before I
switched to GlusterFS the domU images were on local disk and there was
no CPU wait...
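
For example, the waiting shows up inside the domUs with standard tools
such as:

  vmstat 1      # the "wa" column = percentage of CPU time spent waiting on I/O
  iostat -x 1   # per-device utilization and wait times
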
Best regards,
Roland
On 18.03.2010 05:46, Raghavendra G wrote:
> Hi Roland,
>
> Which applications are you running on glusterfs? In particular, what
> is the I/O pattern of those applications? As a general guideline, you
> can try enabling/disabling each of the performance translators,
> observe the gain or loss in performance, and tune the configuration
> accordingly.
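>
> For example, to get a baseline you could mount from a stripped-down
> volfile with no performance translators at all, roughly like this
> (just a sketch based on your own volfiles below):
>
> volume gfs-01-01
>   type protocol/client
>   option transport-type tcp
>   option remote-host hostname
>   option remote-port 6997
>   option remote-subvolume domU-images
> end-volume
>
> volume gfs-01-02
>   type protocol/client
>   option transport-type tcp
>   option remote-host hostname
>   option remote-port 6997
>   option remote-subvolume domU-images
> end-volume
>
> volume gfs-replicate
>   type cluster/replicate
>   subvolumes gfs-01-01 gfs-01-02
> end-volume
>
> Then add writebehind, readahead, io-cache and stat-prefetch back one
> at a time and measure after each step.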
>
> regards,
> On Wed, Mar 17, 2010 at 9:05 PM, Roland Fischer
> <roland.fischer at xidras.com> wrote:
>
> Hi Community,
>
> I need your help. I have performance problems with GlusterFS 3.0.0
> and domUs (Xen).
>
> I use two identical GlusterFS servers (physical hardware) and two
> Xen servers (also physical).
>
> Currently I use client-side replication, which is awfully slow. My
> monitoring tool shows a lot of CPU waiting in the domUs (before I
> switched to GlusterFS there was no CPU wait).
>
> Is server-side replication faster and failsafe? I mean, if one
> GlusterFS server goes down, does the other take over serving the
> domUs?
>
> Is there anything in the volfiles that I can tune? Should I use
> server-side replication? (My rough guess at what that would look
> like is sketched below.)
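>
> This is my untested guess at server-side replication (the host name
> "other-server" is just a placeholder): on each server, keep the
> posix and locks volumes from my server volfile below, add a client
> connection to the peer, and replicate across the two:
>
> volume remote-mirror
>   type protocol/client
>   option transport-type tcp
>   option remote-host other-server
>   option remote-subvolume locks
> end-volume
>
> volume replicate
>   type cluster/replicate
>   subvolumes locks remote-mirror
> end-volume
>
> The protocol/server volume would then export "replicate" to the
> clients (and also the plain "locks" volume, so the peer can reach
> it). But I do not know if this is right, hence my question.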
>
> Should I use --disable-direct-io-mode? If yes, on the server side,
> the client side, or both? And how do I add it in fstab? (My guess
> is below.)
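>
> From what I understand --disable-direct-io-mode is a FUSE client
> option, so I assume it only applies on the client side. My guess for
> fstab would be something like this (the mount point is just an
> example, and I am not sure about the option name):
>
> /etc/glusterfs/mount-domU-images-client_repl.vol /mnt/domU-images glusterfs direct-io-mode=disable 0 0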
>
> Thank you for your help!!!
>
> Server volfile:
> cat /etc/glusterfs/export-domU-images-client_repl.vol
> #############
> volume posix
>   type storage/posix
>   option directory /GFS/domU-images
> end-volume
>
> volume locks
>   type features/locks
>   subvolumes posix
> end-volume
>
> volume domU-images
>   type performance/io-threads
>   option thread-count 8 # default is 16
>   subvolumes locks
> end-volume
>
> volume server
>   type protocol/server
>   option transport-type tcp
>   option auth.addr.domU-images.allow 192.*.*.*,127.0.0.1
>   option transport.socket.listen-port 6997
>   subvolumes domU-images
> end-volume
> ######################
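>
> (For completeness: I start this on each storage server with
>
>   glusterfsd -f /etc/glusterfs/export-domU-images-client_repl.vol
>
> i.e. the volfile shown above.)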
>
> Client volfiles:
>
> cat /etc/glusterfs/mount-domU-images-client_repl.vol
> volume gfs-01-01
>   type protocol/client
>   option transport-type tcp
>   option remote-host hostname
>   option transport.socket.nodelay on
>   option remote-port 6997
>   option remote-subvolume domU-images
>   option ping-timeout 5
> end-volume
>
> volume gfs-01-02
>   type protocol/client
>   option transport-type tcp
>   option remote-host hostname
>   option transport.socket.nodelay on
>   option remote-port 6997
>   option remote-subvolume domU-images
>   option ping-timeout 5
> end-volume
>
> volume gfs-replicate
>   type cluster/replicate
>   subvolumes gfs-01-01 gfs-01-02
> end-volume
>
> volume writebehind
>   type performance/write-behind
>   option cache-size 4MB # default 16
>   subvolumes gfs-replicate
> end-volume
>
> volume readahead
>   type performance/read-ahead
>   option page-count 8 # cache per file = (page-count x page-size)
>   subvolumes writebehind
> end-volume
>
> volume iocache
>   type performance/io-cache
>   option cache-size 1GB # 1GB is now supported
>   option cache-timeout 1
>   subvolumes readahead
> end-volume
>
> volume statprefetch
>   type performance/stat-prefetch
>   subvolumes iocache
> end-volume
>
> #################################################
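>
> (The Xen servers mount this volfile with
>
>   glusterfs -f /etc/glusterfs/mount-domU-images-client_repl.vol /mnt/domU-images
>
> where /mnt/domU-images is just an example mount point.)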
>
>
> Best regards,
> Roland
>
> --
> Raghavendra G
>