[Gluster-devel] memory usage (client)
Harris Landgarten
harrisl at lhjonline.com
Thu Jul 5 15:07:27 UTC 2007
Rhesa,
My setup is very similar to yours, but I am not using io-threads on the client (only on the servers), and I have 2 bricks. This is my top:
4522 root 15 0 14812 5420 848 S 0.0 0.3 0:11.03 glusterfs
Quite a difference.
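For reference, here is a minimal sketch of what a client spec without io-threads on the client side looks like (hostnames and brick names below are placeholders, not my actual config):

# namespace brick on one of the servers
volume ns
type protocol/client
option transport-type tcp/client
option remote-host server1
option remote-subvolume brick-ns
end-volume

# two data bricks, one per server
volume brick1
type protocol/client
option transport-type tcp/client
option remote-host server1
option remote-subvolume brick
end-volume

volume brick2
type protocol/client
option transport-type tcp/client
option remote-host server2
option remote-subvolume brick
end-volume

# unify on the client; io-threads stays on the server side only
volume unify
type cluster/unify
subvolumes brick1 brick2
option namespace ns
option scheduler alu
option alu.limits.min-free-disk 1GB
option alu.order disk-usage:read-usage:write-usage:open-files-usage:disk-speed-usage
end-volume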
Harris
----- Original Message -----
From: "Rhesa Rozendaal" <gluster at rhesa.com>
To: "gluster-devel" <gluster-devel at nongnu.org>
Sent: Thursday, July 5, 2007 10:57:41 AM (GMT-0500) America/New_York
Subject: [Gluster-devel] memory usage (client)
Hi guys,
I've been trying to limit glusterfs' memory consumption, but so far without much luck.
Here's a snapshot of my "top":
6697 root 15 0 369m 295m 876 S 45 14.6 3:10.13 [glusterfs]
And it keeps growing, so I'm not sure where it'll settle. Is there anything I
can do to keep it to around 100m?
Here's my current client config (having played a lot with thread-count,
cache-size, etc.):
volume ns
type protocol/client
option transport-type tcp/client
option remote-host nfs-deb-03
option remote-subvolume ns
end-volume
volume client01
type protocol/client
option transport-type tcp/client
option remote-host nfs-deb-03
option remote-subvolume brick01
end-volume
# snip client02 through client31
volume export
type cluster/unify
subvolumes client01 client02 client03 client31
option namespace ns
option scheduler alu
option alu.limits.min-free-disk 1GB
option alu.order disk-usage:read-usage:write-usage:open-files-usage:disk-speed-usage
end-volume
volume iothreads
type performance/io-threads
option thread-count 4
option cache-size 16MB
subvolumes export
end-volume
volume readahead
type performance/read-ahead
option page-size 4096
option page-count 16
subvolumes iothreads
end-volume
volume writeback
type performance/write-behind
option aggregate-size 131072
option flush-behind on
subvolumes readahead
end-volume
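For what it's worth, here is a trimmed variant of the performance translators above that I've been experimenting with. The smaller thread-count, cache-size and page-count values are only guesses at what drives the resident size, not a confirmed fix:

# trimmed caches; values are illustrative only
volume iothreads
type performance/io-threads
option thread-count 2
option cache-size 8MB
subvolumes export
end-volume

volume readahead
type performance/read-ahead
option page-size 4096
option page-count 4
subvolumes iothreads
end-volume

volume writeback
type performance/write-behind
option aggregate-size 131072
option flush-behind on
subvolumes readahead
end-volume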
Rhesa
_______________________________________________
Gluster-devel mailing list
Gluster-devel at nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel