[Gluster-users] Quickread Translator Memory Leak?

Benjamin Long benjamin.long at longbros.com
Thu Mar 18 19:30:50 UTC 2010


On Thursday 18 March 2010 03:03:03 pm Vijay Bellur wrote:
> Benjamin Long wrote:
> > Has anyone else noticed a memory leak when using the Quickread
> > translator?
> 
> The Quickread translator does unlimited caching as of now. This is not a
> memory leak, but it has the same effect of exhausting available memory.
> We are going to improve this behavior through enhancement bug 723.
> 
> > My workstations are having a problem as well. After running for a few
> > days (as long as a week) the users start having their sessions killed.
> > They are returned to a login prompt, and can log in again. Glusterfs is
> > still running at this point, but I think that's because all the users' apps
> > were first on the kill list for an OOM condition. The backup server runs
> > nothing but glusterfs and rsync.
> 
> Do you have details of GlusterFS's memory usage (resident memory and
> percentage of memory used) at the instant the OOM condition was
> observed?
> 
> 
> Regards,
> Vijay
> 

Yep. It's a VM with 1 GB of RAM. It runs nothing but gluster, rsync, and ssh. I 
saw glusterfs using 97% of the RAM just before it died. All the swap was used 
up too.
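
For reference, here's a rough sketch of how I plan to log glusterfs memory use
going forward, so we have the RES and %MEM numbers at the moment of the next
OOM (the log path and the one-minute interval are just placeholders I'd pick):

  # log glusterfs RSS and %MEM once a minute until the OOM happens again
  while true; do
      date >> /var/log/glusterfs-mem.log
      ps -o pid,vsz,rss,%mem,cmd -C glusterfs >> /var/log/glusterfs-mem.log
      sleep 60
  done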

Here's the output of top about 10 min before that:
  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 2868 root      20   0 55664 2356  648 S   14  0.2   0:23.28 rsync
 2239 root      20   0  933m 752m 1300 R    5 74.9   0:12.06 glusterfs

I can turn quickread back on and test some more if it will be helpful.
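
For what it's worth, this is roughly the quick-read section I'd be putting back
into the client volfile. The option names and the subvolume name are from my
setup and may differ by version, so treat it as a sketch rather than exact
config:

  volume quickread
    type performance/quick-read
    option cache-timeout 1      # seconds; how long cached data is considered valid
    option max-file-size 64kB   # only files up to this size are served from cache
    subvolumes iocache          # my io-cache volume; name depends on the volfile
  end-volume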

-- 
Benjamin Long


