[Gluster-users] Possible memory leak ?

John Ewing johnewing1 at gmail.com
Thu Sep 12 09:21:23 UTC 2013


I see that 3.3.2 was released not long after we did our install, and there
is a memory leak fix in the changelog.
We are running a replicated volume with a pair of nodes (not
geo-replication). What would be the correct procedure for updating them? I
can only find instructions for upgrading from 3.2 to 3.3, not for point
releases.
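
Would a rolling update along these lines be safe? This is only a sketch of
what I had in mind, assuming a yum-based install; "gv0" is a placeholder,
not our real volume name:

    # on one node at a time, while the other node keeps serving clients
    service glusterd stop            # stop the management daemon
    pkill glusterfsd                 # stop the brick processes on this node
    yum update glusterfs glusterfs-server glusterfs-fuse
    service glusterd start           # glusterd respawns the brick processes
    gluster volume heal gv0          # kick off self-heal
    gluster volume heal gv0 info     # wait for this to come back clean
                                     # before repeating on the second node

Is that roughly right, or is there an official procedure for point releases?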




On Wed, Sep 11, 2013 at 5:03 PM, Lukáš Bezdička <lukas.bezdicka at gooddata.com> wrote:

> I'm aware of two different memory leaks in 3.3.1: one is in
> geo-replication and the other is on the native client side.
> Sadly, both got mixed together in https://bugzilla.redhat.com/show_bug.cgi?id=841617
>
> I can tell you that the geo-replication leak is still present in 3.4.0
> and the native client leak isn't, but I don't know which patch you would
> need to backport :(
>
>
> On Wed, Sep 11, 2013 at 1:16 PM, John Ewing <johnewing1 at gmail.com> wrote:
>
>> Hi,
>>
>> I am using Gluster 3.3.1 on CentOS 6, installed from
>> the glusterfs-3.3.1-1.el6.x86_64 rpms.
>> I am seeing Committed_AS memory continually increasing, and the
>> processes using the memory are glusterfsd instances.
>>
>> see http://imgur.com/K3dalTW for graph.
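>>
>> For reference, this is roughly how I'm watching it (just a command
>> sketch; glusterfsd is the brick process):
>>
>>     grep Committed_AS /proc/meminfo
>>     ps -C glusterfsd -o pid,rss,vsz,cmd --sort=-rss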
>>
>> Both nodes are exhibiting the same behaviour. I have tried the suggested
>>
>> echo 2 > /proc/sys/vm/drop_caches
>>
>> but it made no difference. Is there a known issue with 3.3.1?
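>>
>> The next thing I was planning to try is a statedump to see where the
>> memory is going (again a sketch; "gv0" is a placeholder for our volume
>> name):
>>
>>     gluster volume statedump gv0
>>     # dump files should appear under /var/run/gluster/
>>     # (the path may vary by version)
>>
>> Does that sound like a sensible way to narrow it down?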
>>
>> Thanks
>>
>> John
>>

