[Gluster-users] Using so many memories and swaps
Masahiro Ohno
ohno-ma at nttpc.co.jp
Mon Apr 18 10:01:12 UTC 2011
Hi, Gluster community,
We introduced GlusterFS 2.0.2 in Dec. 2009 as the backend storage for Zimbra
and have been operating the system since then. Now we are facing a problem.
1. System configuration
This system has three GlusterFS servers providing three-way replication.
There are four FUSE clients: one for management and monitoring, and the
other three for email services.
Each server exports three volumes: one holds the clients' log files, and
the other two are used for email services (the clients aggregate these
two volumes with cluster/distribute).
I'll show the server volfiles at the end of this email.
As for hardware, the servers are HP ProLiant DL380 G6 machines, each with
12 GB of RAM.
As for software, the OS is RHEL5 and the GlusterFS version is 2.0.2.
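For reference, the client side described above (three-way replication of each volume across the servers, with the two email volumes aggregated by cluster/distribute) would look roughly like the sketch below. This is not taken from our actual client volfile: the host names are assumptions, only one replicated volume is shown in full, and which two exports carry mail is illustrative.

```
# Hypothetical client volfile sketch -- host names are assumptions.
volume nt001-srv1
  type protocol/client
  option transport-type tcp
  option remote-host server1          # assumed host name
  option remote-port 30001
  option remote-subvolume brick
end-volume
# (nt001-srv2 and nt001-srv3 are defined the same way, pointing at the
#  other two servers' nt001 export)

volume mail1
  type cluster/replicate
  subvolumes nt001-srv1 nt001-srv2 nt001-srv3
end-volume
# (mail2 is built the same way from the nt002 exports on port 30002)

volume mail
  type cluster/distribute
  subvolumes mail1 mail2
end-volume
```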
2. Details of the trouble
a) One of the glusterfsd processes started consuming a huge amount of
memory and swap. We added 55 GB of swap, but the process used 50 GB of it
and its usage was still increasing.
b) We added the swap as image files online, without rebooting the OS, in
12 GB increments.
c) In this situation we restarted glusterfsd. It appeared to restart
correctly, but performance on the GlusterFS clients dropped terribly.
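Step b) above can be sketched as follows; this is the standard way to add file-backed swap online, but the path and commands here are illustrative rather than copied from what we ran (run as root):

```shell
# Create and enable a 12 GB file-backed swap area without rebooting.
# The path /var/swapfile1 is an assumption, not from the original post.
dd if=/dev/zero of=/var/swapfile1 bs=1M count=12288  # 12 GB of zeroes
chmod 600 /var/swapfile1   # swap files must not be world-readable
mkswap /var/swapfile1      # write the swap signature
swapon /var/swapfile1      # enable it immediately
swapon -s                  # confirm the new area is active
```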
3. What we want to know
a) Is this behavior of glusterfsd (consuming so much memory and swap) correct?
b) After the restart in 2-c above, glusterfsd was using about 700% of CPU.
Is this behavior correct?
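To help correlate the growth with load, one could periodically record the memory and CPU usage of glusterfsd; a minimal sketch using standard procps tools (available on RHEL5) might look like this:

```shell
# Log glusterfsd memory and CPU usage once a minute.
# RSS/VSZ are in KiB; %CPU above 100 means several busy threads/cores.
while true; do
    date
    ps -C glusterfsd -o pid,rss,vsz,pcpu,pmem,args
    sleep 60
done
```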
Thanks and regards,
Masahiro Ono
::::::::::::::
/etc/glusterfs/nt001/glusterfsd.vol
::::::::::::::
volume posix
  type storage/posix
  option directory /mnt/nt001
end-volume
volume locks
  type features/locks
  subvolumes posix
end-volume
volume iothreads
  type performance/io-threads
  option thread-count 8
  subvolumes locks
end-volume
volume brick
  type performance/io-cache
  option cache-size 64MB
  option page-size 4MB
  subvolumes iothreads
end-volume
volume server
  type protocol/server
  option transport-type tcp
  option transport.socket.listen-port 30001
  subvolumes brick
  option auth.addr.brick.allow *
end-volume
::::::::::::::
/etc/glusterfs/nt002/glusterfsd.vol
::::::::::::::
volume posix
  type storage/posix
  option directory /mnt/nt002
end-volume
volume locks
  type features/locks
  subvolumes posix
end-volume
volume iothreads
  type performance/io-threads
  option thread-count 8
  subvolumes locks
end-volume
volume brick
  type performance/io-cache
  option cache-size 64MB
  option page-size 4MB
  subvolumes iothreads
end-volume
volume server
  type protocol/server
  option transport-type tcp
  option transport.socket.listen-port 30002
  subvolumes brick
  option auth.addr.brick.allow *
end-volume
::::::::::::::
/etc/glusterfs/nt003/glusterfsd.vol
::::::::::::::
volume posix
  type storage/posix
  option directory /mnt/nt003
end-volume
volume locks
  type features/locks
  subvolumes posix
end-volume
volume iothreads
  type performance/io-threads
  option thread-count 8
  subvolumes locks
end-volume
volume brick
  type performance/io-cache
  option cache-size 64MB
  option page-size 4MB
  subvolumes iothreads
end-volume
volume server
  type protocol/server
  option transport-type tcp
  option transport.socket.listen-port 30003
  subvolumes brick
  option auth.addr.brick.allow *
end-volume
---
Masahiro Ono
NTTPC Communications, Inc.
+81 3 6203 2713
ohno-ma at nttpc.co.jp
---