[Gluster-users] Expanding Volumes and Geo-replication

M S Vishwanath Bhat vbhat at redhat.com
Thu Sep 4 10:33:28 UTC 2014


On 04/09/14 15:31, Paul Mc Auley wrote:
> On 04/09/2014 09:24, M S Vishwanath Bhat wrote:
>> On 04/09/14 00:33, Vijaykumar Koppad wrote:
>>> On Wed, Sep 3, 2014 at 8:20 PM, M S Vishwanath Bhat
>>> <vbhat at redhat.com> wrote:
>>>
>>>     On 01/09/14 23:09, Paul Mc Auley wrote:
>>>
>>>
>>>         Is geo-replication from a replica 3 volume to a replica 2
>>>         volume possible?
>>>
>>>     Yes. geo-replication just needs two gluster volumes (master ->
>>>     slave). It doesn't matter what configuration the master and
>>>     slave have, but the slave should be big enough to hold all the
>>>     data in the master.
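>>>
>>>     For instance, a minimal initial setup (with hypothetical volume
>>>     names "mastervol" and "slavevol", and "slavehost" standing in
>>>     for a reachable slave node) would look roughly like:
>>>
>>>         gluster system:: execute gsec_create
>>>         gluster volume geo-replication mastervol \
>>>             slavehost::slavevol create push-pem
>>>         gluster volume geo-replication mastervol \
>>>             slavehost::slavevol start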
>>>
>>>         Should I stop geo-replication before adding additional
>>>         bricks? (I assume yes)
>>>
>>>     There is no need to stop geo-rep while adding more bricks to the
>>>     volume.
>>>
>>>         Should I stop the volume(s) before adding additional bricks?
>>>         (Doesn't _seem_ to be the case)
>>>
>>>     No.
>>>
>>>         Should I rebalance the volume(s) after adding the bricks?
>>>
>>>     Yes. After add-brick, rebalance should be run.
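>>>
>>>     For example, with hypothetical hostnames and brick paths (on a
>>>     replica 3 volume, bricks are added in multiples of three, which
>>>     adds a new replica set):
>>>
>>>         gluster volume add-brick mastervol \
>>>             node4:/bricks/b1 node5:/bricks/b1 node6:/bricks/b1
>>>         gluster volume rebalance mastervol start
>>>         gluster volume rebalance mastervol status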
>>>
>>>         Should I need to recreate the geo-replication to push-pem
>>>         subsequently, or can I do that out-of-band?
>>>         ...and if so should I have to add the passwordless SSH key
>>>         back in? (As opposed to the restricted secret.pem)
>>>         For that matter, in the initial setup, is it an expected
>>>         failure mode that the geo-replication create will fail if
>>>         the slave host's SSH key isn't known?
>>>
>>>     After the add-brick, the newly added node will not have any pem
>>>     files, so you need to run "geo-rep create push-pem force". This
>>>     will push the pem files to the newly added node as well. Then
>>>     you need to run "geo-rep start force" to start the gsync
>>>     processes on the newly added node.
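>>>
>>>     With the same hypothetical names as above:
>>>
>>>         gluster volume geo-replication mastervol \
>>>             slavehost::slavevol create push-pem force
>>>         gluster volume geo-replication mastervol \
>>>             slavehost::slavevol start force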
>>>
>>>     So the sequence of steps for you will be,
>>>
>>>     1. Add new nodes to both master and slave using gluster
>>>     add-brick command.
>>>
>>> After this, we need to run "gluster system:: execute gsec_create"
>>> on a master node and then proceed with step 2.
>> Yeah. Missed it... Sorry :)
> Ah, I suspect that's the step I was missing.
>> The pem files need to be generated for the newly added nodes before
>> pushing them to the slave. The above command does that.
>>>
>>>     2. Run geo-rep create push-pem force and start force.
>>>     3. Run rebalance.
>>>
>>>     Hope this works and hope it helps :)
>>>
> Thanks for that, folks. Additionally, I had assumed I'd need to stop
> the replication before rebalancing or redoing the replication; it
> appears to be a bit more stable if I proceed as described.
> Also, my reading is that while replication between the master and
> slave runs between individual nodes (which happen to host single
> bricks), the master doesn't deal with the slave's bricks.
No. The data is always pushed to the slave volume through an auxiliary
mount of the slave volume; nothing is written to the slave bricks
directly.
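
For reference, the whole corrected sequence in one place (volume,
host and brick names below are placeholders):

    # 1. Expand both volumes (run add-brick on master and on slave)
    gluster volume add-brick mastervol <new-master-bricks>
    gluster volume add-brick slavevol <new-slave-bricks>

    # 2. Generate pem files for the new nodes (on a master node)
    gluster system:: execute gsec_create

    # 3. Push the pem files and restart the gsync processes
    gluster volume geo-replication mastervol slavehost::slavevol \
        create push-pem force
    gluster volume geo-replication mastervol slavehost::slavevol \
        start force

    # 4. Rebalance the expanded volume(s)
    gluster volume rebalance mastervol start

    # Optionally, verify that all bricks are participating
    gluster volume geo-replication mastervol slavehost::slavevol status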

Best Regards,
Vishwanath

> Paul
