[Gluster-users] Poor Performance on a replicate client
Rainer Krienke
krienke at uni-koblenz.de
Thu Jun 18 09:19:29 UTC 2009
Hello,

I am just playing around with glusterfs version 2.0.1 on two openSuSE 11.1
systems, using the replicate translator. My client and server configs are
below this mail. I have two servers, and one of them also acts as the
client. Basically this setup works.
The problem is that write performance on the replicated volume is really
poor. I copied /etc (running: find /etc/ | cpio -pdmv /replicatedvol) onto
the replicated volume and this took 4 minutes. In contrast, doing the same
copy on a local filesystem takes only 4 seconds. The complete size of /etc
is 95 MB, so the write performance was about 0.395 MB/sec for glusterfs.
Is my config wrong? This performance seems very poor to me. Both servers
are connected by a switched 100 Mbit/sec network that is not busy.
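For reference, this is roughly how I did the measurement; /replicatedvol is
my glusterfs client mount and time is just the shell builtin:

    # copy /etc onto the glusterfs mount and time it
    time ( find /etc/ | cpio -pdm /replicatedvol )
    # about 240 s for 95 MB  ->  95 / 240 = ~0.4 MB/sec
    # the same cpio into a local directory finishes in about 4 s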
Another question is about recovery. I killed glusterfsd on one of my two
servers (say server "B"), then copied data into the replicated directory
from my client. Next I started server "B" again. My expectation was that it
would notice that server "A" has data in its replicated directory that is
not yet on "B". But nothing happened. The two servers only came back in
sync when I accessed the replicated filesystem via the client, e.g. by
doing an ls. Is this the only way to sync two replicating servers?
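If a lookup from the client side is really what triggers the self-heal
(that is what the ls suggests), I guess the only workaround after bringing
a server back would be to walk the whole mount once from the client,
something like:

    # stat every entry once so replicate gets a chance to self-heal it
    find /replicatedvol -print0 | xargs -0 stat > /dev/null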
Here are my current test configs. The server config runs on both
openSuSE 11.1 machines; the client config runs on one of the two machines:
Server:
volume posix
  type storage/posix
  option directory /cluster
end-volume

volume locks
  type features/locks
  subvolumes posix
end-volume

volume brick
  type performance/io-threads
  option thread-count 8
  subvolumes locks
end-volume

volume server
  type protocol/server
  option transport-type tcp
  option auth.addr.brick.allow *
  subvolumes brick
end-volume
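For completeness, I start the server side with something like the following
(the volfile path is just where I keep my test config, adjust as needed):

    # start glusterfsd with the server volfile above
    glusterfsd -f /etc/glusterfs/glusterfsd-server.vol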
Client:
volume remote1
  type protocol/client
  option transport-type tcp
  option remote-host rzinstal2.uni-koblenz.de
  option remote-subvolume brick
end-volume

volume remote2
  type protocol/client
  option transport-type tcp
  option remote-host bliss.uni-koblenz.de
  option remote-subvolume brick
end-volume

volume replicate
  type cluster/replicate
  subvolumes remote1 remote2
  option metadata-self-heal on
  option data-self-heal on
  option entry-self-heal on
end-volume

volume writebehind
  type performance/write-behind
  option window-size 1MB
  subvolumes replicate
end-volume

volume replicateclient
  type performance/io-cache
  option cache-size 512MB
  subvolumes writebehind
end-volume
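The client side is mounted like this (again, the volfile path and the mount
point /replicatedvol are just from my test setup):

    # mount the replicated volume on the client
    glusterfs -f /etc/glusterfs/glusterfs-client.vol /replicatedvol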
--
Rainer Krienke, Uni Koblenz, Rechenzentrum, A22, Universitaetsstrasse 1
56070 Koblenz, http://www.uni-koblenz.de/~krienke, Tel: +49261287 1312
PGP: http://www.uni-koblenz.de/~krienke/mypgp.html, Fax: +49261287 1001312