[Gluster-users] Big I/O or gluster process problem
Tejas N. Bhise
tejas at gluster.com
Thu May 20 13:10:33 UTC 2010
If I am not mistaken, you have a single server for GlusterFS, and this is mirrored to a second (non-GlusterFS) server using DRBD. If you have only a single server to export data from, why use GlusterFS? Also, we don't officially support DRBD replication with a GlusterFS backend.
Maybe you can consider GlusterFS replication across the two servers?
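If you go that route, here is a minimal sketch of what the client-side volume file might look like; the addresses 192.168.0.10/11 are placeholders for your two servers, each exporting a brick named b1:

volume storage1
  type protocol/client
  option transport-type tcp
  option remote-host 192.168.0.10   # placeholder: first server
  option remote-subvolume b1
end-volume

volume storage2
  type protocol/client
  option transport-type tcp
  option remote-host 192.168.0.11   # placeholder: second server
  option remote-subvolume b1
end-volume

volume replicate
  type cluster/replicate
  subvolumes storage1 storage2
end-volume

With cluster/replicate, every write goes to both bricks, and the surviving brick keeps serving if one server goes down, so no DRBD layer is needed underneath.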
----- Original Message -----
From: "Ran" <smtp.test61 at gmail.com>
To: "Tejas N. Bhise" <tejas at gluster.com>
Sent: Thursday, May 20, 2010 6:30:21 PM
Subject: Re: [Gluster-users] Big I/O or gluster process problem
Tejas hi ,
The two servers are a DRBD pair with HA, so Gluster actually has one server that exports one 1 TB disk; this disk is DRBD'd to the other server.
This 1 TB disk is also RAID 1 with Linux software RAID (I know it's not optimal, but it's robust). In this setup, if one server goes down the other continues. DRBD is more robust than Gluster replication, especially for VPSs etc.
I didn't check iowait, but the load on the server is about 5 while the CPUs are only at 10-50%, so that says it all (there are I/O waits).
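For reference, a minimal way to watch iowait during a big write, assuming the standard sysstat/procps tools are installed (nothing here is specific to this setup):

iostat -x 2   # per-device utilization and average wait times
vmstat 2      # the 'wa' column is the percentage of CPU time spent waiting on I/O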
I was thinking of breaking the RAID 1, since this disk already has a full mirror via DRBD (to server 2), but I'm not sure it will resolve this problem, since with NFS it's not the same: it slows things down, but not to the point of being non-functional.
Client vol file below. 192.168.0.9 is the HA IP of this pair. I've also tested with a plain config (no writebehind etc.):
<>
# file: /etc/glusterfs/glusterfs.vol

volume storage1-2
  type protocol/client
  option transport-type tcp
  option remote-host 192.168.0.9
  option remote-subvolume b1
  option ping-timeout 120
  option username ..........
  option password ..........
end-volume

volume cluster
  type cluster/distribute
  option lookup-unhashed yes
  subvolumes storage1-2
end-volume

#volume writebehind
#  type performance/write-behind
#  option cache-size 3MB
#  subvolumes cluster
#end-volume

#volume readahead
#  type performance/read-ahead
#  option page-count 4
#  subvolumes writebehind
#end-volume

volume iothreads
  type performance/io-threads
  option thread-count 4
  subvolumes cluster
end-volume

volume io-cache
  type performance/io-cache
  option cache-size 128MB
  option page-size 256KB                # 128KB is the default
  option force-revalidate-timeout 10    # default is 1
  subvolumes iothreads
end-volume

volume writebehind
  type performance/write-behind
  option aggregate-size 512KB    # default is 0 bytes
  option flush-behind on         # default is 'off'
  subvolumes io-cache
end-volume
<>
Server vol file:
<>
# file: /etc/glusterfs/glusterfs-server.vol

volume posix
  type storage/posix
  option directory /data/gluster
  # option o-direct enable
  option background-unlink yes
  # option span-devices 8
end-volume

volume locks
  type features/locks
  subvolumes posix
end-volume

volume b1
  type performance/io-threads
  option thread-count 8
  subvolumes locks
end-volume

volume server
  type protocol/server
  option transport.socket.nodelay on
  option transport-type tcp
  # option auth.addr.b1.allow *
  option auth.login.b1.allow ..........
  option auth.login.gluster.password ................
  subvolumes b1
end-volume
<>
2010/5/20 Tejas N. Bhise <tejas at gluster.com>
Ran,
Can you please elaborate on "2 servers in distribute mode, each has a 1 TB brick that replicates to the other using DRBD"?
Also, how many drives do you have, and what does iowait look like when you write a big file? Tell us more about the configs of your servers, and share the volume files.
Regards,
Tejas.
----- Original Message -----
From: "Ran" < smtp.test61 at gmail.com >
To: Gluster-users at gluster.org
Sent: Thursday, May 20, 2010 4:49:52 PM
Subject: [Gluster-users] Big I/O or gluster process problem
Hi all,
Our problem is simple but quite critical. I posted a few months ago about this issue and there were good responses, but no fix.
What happens is that Gluster gets stuck when there is a big write to it.
For example: time dd if=/dev/zero of=file bs=10240 count=1000000 (roughly 10 GB in 10 KB writes),
or mv 20gig_file.img into the gluster mount.
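As a side note, the same amount of data can be written in far fewer, larger requests; a generic dd variant (not specific to this setup) that is a useful comparison point when chasing I/O stalls:

time dd if=/dev/zero of=file bs=1M count=10240   # same ~10 GB, in 1 MiB writes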
When that happens, the whole storage freezes for the entire process: mail, a few VPSs, a simple dir, etc.
Our setup is quite simple at this point: 2 servers in distribute mode, each has a 1 TB brick that replicates to the other using DRBD.
I've monitored everything closely during these big writes and noticed that it's not a memory, CPU, or network problem.
I've also checked the same setup with plain NFS, and it doesn't happen there.
Does anyone have any idea how to fix this? If you can't write big files into Gluster without making the storage non-functional, then you can't really do anything.
Please advise,
_______________________________________________
Gluster-users mailing list
Gluster-users at gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users