[Gluster-devel] glusterfs on Ubuntu 10.04/Rackspace: rampant RAM usage

chris bake <cbake@livemercial.com>
Thu Aug 26 16:10:25 UTC 2010


Hello,

I'm running GlusterFS v3.0.2 with the native FUSE client on two
Rackspace VMs (GFS1 and GFS2), each with 4GB RAM and a 160GB disk.
Roughly 57% of the disk space is still free.

glusterfsd and postfix are the only services running on these two
servers. Six external clients are connected, and each server also
mounts the other, for eight clients in total.

On a fresh boot of the servers and processes, RAM usage is minimal.
After a few hours of uptime, however, free memory drops to under 100MB
on GFS2 and under 20MB on GFS1.
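
One thing I'm trying to rule out: on Linux, free memory falling toward
zero is often just the page cache filling up, which the kernel reclaims
on demand. A quick way to separate cache from real process usage (a
sketch, assuming the stock procps tools on Ubuntu 10.04; 2421 is the
glusterfsd PID from the top output below):

  # The "-/+ buffers/cache" line shows memory still available
  # once the kernel's caches are discounted.
  free -m

  # Resident and virtual size of glusterfsd itself:
  ps -o pid,rss,vsz,cmd -p 2421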

"lsof | grep gfs" reveals 53 connections on GFS1 and 45 on GFS2  from  
the multiple clients.
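
For a per-client breakdown, the same information can be pulled from
netstat (assuming net-tools is installed; run as root so -p can map
sockets to processes):

  # Established TCP connections held by glusterfsd, grouped by peer IP:
  netstat -tnp | grep glusterfsd | awk '{print $5}' | cut -d: -f1 | sort | uniq -c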

This doesn't appear to be client-related, since memory usage is minimal
at boot time even with all connections active. That said, I'm not
completely familiar with the configuration options.

I've just pushed these servers into production; the websites they serve
receive roughly 50k hits a day in total, yet the RAM issue was present
before any real traffic existed. Do I have a configuration error, or am
I missing any major performance tuning options?

Any help would be very much appreciated. Thanks,
Chris

TOP:

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
  2421 root      20   0  612m 419m 1092 S    0 10.4   1715:36 glusterfsd
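
To see whether glusterfsd itself is growing (a slow leak) rather than
the kernel caching aggressively, I can sample its resident set over
time (a simple sketch; the PID is the one from top above):

  # Log glusterfsd's RSS in KB once a minute:
  while true; do
      echo "$(date '+%H:%M:%S') $(ps -o rss= -p 2421)"
      sleep 60
  done >> /tmp/glusterfsd-rss.log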

Here is my server config:

root@lmdc3gfs02:~# cat /etc/glusterfs/glusterfs-server.vol
volume posix
   type storage/posix
   option directory /data/export        # on-disk backend for the data brick
end-volume

volume locks
   type features/locks
   subvolumes posix                     # POSIX locking on top of the backend
end-volume

volume brick
   type performance/io-threads
   option thread-count 8                # 8 worker threads for server-side I/O
   subvolumes locks
end-volume

volume posix-ns
   type storage/posix
   option directory /data/export-ns     # on-disk backend for the namespace brick
end-volume

volume locks-ns
   type features/locks
   subvolumes posix-ns
end-volume

volume brick-ns
   type performance/io-threads
   option thread-count 8
   subvolumes locks-ns
end-volume

volume server
   type protocol/server
   option transport-type tcp
   option auth.addr.brick.allow *       # no IP restriction on either brick
   option auth.addr.brick-ns.allow *
   subvolumes brick brick-ns
end-volume
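
For completeness, this is the volfile glusterfsd is started against (a
sketch of the invocation rather than the init script that normally runs
it):

  glusterfsd -f /etc/glusterfs/glusterfs-server.vol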

Client Config:

root@lmdc3gfs02:~# cat /etc/glusterfs/glusterfs-client.vol
volume brick1
  type protocol/client
  option transport-type tcp/client
  option remote-host 10.179.122.66      # IP address of the remote brick
  option remote-subvolume brick         # name of the remote volume
  option ping-timeout 2
end-volume

volume brick2
  type protocol/client
  option transport-type tcp/client
  option remote-host 10.179.122.69      # IP address of the remote brick
  option remote-subvolume brick         # name of the remote volume
  option ping-timeout 2
end-volume

volume brick1-ns
  type protocol/client
  option transport-type tcp/client
  option remote-host 10.179.122.66      # IP address of the remote brick
  option remote-subvolume brick-ns      # name of the remote volume
  option ping-timeout 2
end-volume

volume brick2-ns
  type protocol/client
  option transport-type tcp/client
  option remote-host 10.179.122.69      # IP address of the remote brick
  option remote-subvolume brick-ns      # name of the remote volume
  option ping-timeout 2
end-volume

volume afr1
  type cluster/afr
  subvolumes brick1 brick2              # replicate the data bricks across both servers
end-volume

volume afr-ns
  type cluster/afr
  subvolumes brick1-ns brick2-ns        # replicate the namespace bricks
end-volume
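
On the tuning question: the client volfile above stacks no performance
translators at all, so nothing here should be caching on the client
side. If a bounded cache were wanted, here is a sketch of one stacked
on top of afr1 (performance/io-cache exists in 3.0.x; the values below
are illustrative, not something I'm running):

volume iocache
  type performance/io-cache
  option cache-size 64MB       # illustrative upper bound on cached data
  option cache-timeout 1       # seconds before cached pages are revalidated
  subvolumes afr1
end-volume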



