[Gluster-users] Increasing replica count from 2 to 3

Jackie Tung jackie at drive.ai
Fri Dec 30 02:32:22 UTC 2016


Thank you.

It was glusterd and the bricks.

I sent TERM to glusterd, then restarted the systemd service.  This actually
caused all bricks to be restarted as well.  Is this the right way to do it?
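
For reference, one way to confirm whether the brick processes were restarted
is to compare their PIDs and uptimes before and after; the volume name below
is a placeholder:

    # Lists every brick and daemon with its current PID and online status
    gluster volume status <volname>

    # Shows how long each brick/management process has been running
    ps -C glusterfsd,glusterd -o pid,etime,args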

On Dec 29, 2016 3:01 PM, "Joe Julian" <joe at julianfamily.org> wrote:

> Which application is filling memory?
>
> If it's a brick (glusterfsd), then stopping and starting that brick ("kill"
> and "gluster volume start ... force") will not waste cycles re-healing
> files that are already healthy. Any incomplete heals of individual files
> will be restarted, as will heals of any files that changed while the brick
> was offline.
> If it's glusterd, that can be restarted at any time without interfering
> with the volume.
>
> If it's glustershd ("/usr/bin/glusterfs -s localhost --volfile-id
> gluster/glustershd ..."), you can restart that with "gluster volume start
> ... force" (even if the volume is already started).
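>
> As a concrete sketch of that brick restart sequence (the volume name is a
> placeholder; pick the PID of the misbehaving brick from the status output):
>
>     # Find the PID of the brick process in question
>     gluster volume status <volname>
>
>     # Stop just that brick (TERM, not -9)
>     kill <brick-pid>
>
>     # Bring the missing brick (and glustershd) back up; this is safe even
>     # though the volume is already started
>     gluster volume start <volname> force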
>
>
> On 12/29/2016 02:27 PM, Jackie Tung wrote:
>
> Ravi,
>
> Got it, thanks.  I’ve kicked this off, and it seems to be doing OK.
>
> I am a little concerned about a slow creep of memory usage:
>
> * swap (64GB) completely filled up on server_1
> * general memory usage creeping up slowly over time.
>
> $ free -m
>               total        used        free      shared  buff/cache   available
> Mem:         128829       55596         614          53       72618       71783
> Swap:         61034       61034           0
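>
> One way to see which gluster process is actually growing (just a suggestion
> for narrowing it down, not something from the bug report):
>
>     # Per-daemon RSS, largest first, refreshed every minute
>     watch -n 60 'ps -C glusterd,glusterfs,glusterfsd -o pid,rss,args --sort=-rss'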
>
> Similar issue on server_2, though with a lower starting memory usage.
>
> The “available” number is slowly going down; at this rate, it will probably
> reach 0 before the heal is done.
>
> We are actually running 3.8.6.  I’d like to try to pause the heal, upgrade
> to 3.8.7, and then resume.  Is suspending and resuming a heal like this
> possible or advisable?
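>
> The closest thing to a pause I can think of, assuming pending index heals
> simply resume once it is re-enabled, would be toggling the self-heal daemon
> (volume name is a placeholder):
>
>     # Stop shd-driven heals; clients can still heal files they touch
>     gluster volume set <volname> cluster.self-heal-daemon off
>
>     # (perform the upgrade here)
>
>     # Re-enable; entries still in the heal index get picked up again
>     gluster volume set <volname> cluster.self-heal-daemon on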
>
> The upgrade idea came from this Bugzilla entry (not 100% sure it will help
> with my leak):
>
> https://bugzilla.redhat.com/show_bug.cgi?id=1400927
>
> Even without doing the upgrade, I may need to restart glusterfs-server
> anyway to reset memory usage.
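>
> For the record, on Ubuntu xenial that restart would be something like the
> following (the unit name is an assumption; it may be "glusterd" on other
> packaging):
>
>     sudo systemctl restart glusterfs-server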
>
> Thanks,
> Jackie
>
> On Dec 28, 2016, at 9:40 PM, Ravishankar N <ravishankar at redhat.com> wrote:
>
> On 12/29/2016 10:46 AM, Jackie Tung wrote:
>
> Thanks very much for the advice.
>
> Would you mind elaborating on the "no io" recommendation?  It's somewhat
> hard for me to guarantee this without a long maintenance window.
>
> What are the consequences of having I/O at the point of add-brick, and
> during the heal period afterwards?
>
>
> Sorry I wasn't clear. Since you're running 16 distribute legs (16x2), a lot
> of self-heals will be running, and there is a chance that clients might
> experience slowness because of them. Other than that it should be fine.
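>
> (An aside, not something the advice above depends on: if the self-heal load
> does slow clients noticeably, one commonly used mitigation is to leave
> healing entirely to glustershd by disabling client-side heals; the volume
> name is a placeholder.)
>
>     gluster volume set <volname> cluster.data-self-heal off
>     gluster volume set <volname> cluster.metadata-self-heal off
>     gluster volume set <volname> cluster.entry-self-heal off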
> Thanks,
> Ravi
>
>
>
>
> On Dec 28, 2016 8:27 PM, "Ravishankar N" <ravishankar at redhat.com> wrote:
>
> On 12/29/2016 07:30 AM, Jackie Tung wrote:
>
> Version is 3.8.7 on Ubuntu xenial.
>
> On Dec 28, 2016 5:56 PM, "Jackie Tung" <jackie at drive.ai> wrote:
>
>> If someone has experience to share in this area, I'd be grateful.  I have
>> an existing distributed-replicated volume, 2x16.
>>
>> We have a third server ready to go.  Red Hat docs say to just run
>> add-brick with replica 3, then run rebalance.
>>
>> The rebalance step feels a bit off to me.  Isn't some kind of heal
>> operation in order, rather than a rebalance?
>>
>> No additional usable space will be introduced; only the replica count will
>> increase from 2 to 3.
>>
>
> You don't need to run rebalance to increase the replica count. Heals
> should be triggered automatically when you run `gluster vol add-brick
> <volname> replica 3 <list of bricks for the 3rd replica>`. It is advisable
> to do this when there is no I/O happening on the volume. You can verify
> that files are getting populated in the newly added bricks after running
> the command.
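>
> For a 16x2 volume, a rough sketch of that command (the hostname and brick
> paths are made up; the 16 new bricks map one-to-one, in order, onto the
> existing distribute legs):
>
>     gluster volume add-brick <volname> replica 3 \
>         server3:/bricks/brick{1..16}/<volname>
>
>     # Watch the pending heals drain into the new bricks
>     gluster volume heal <volname> info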
>
> -Ravi
>
>
>> Thanks
>> Jackie
>>
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
>
