[Gluster-users] Expanding Volumes and Geo-replication
M S Vishwanath Bhat
vbhat at redhat.com
Thu Sep 4 08:24:58 UTC 2014
On 04/09/14 00:33, Vijaykumar Koppad wrote:
>
>
>
> On Wed, Sep 3, 2014 at 8:20 PM, M S Vishwanath Bhat <vbhat at redhat.com> wrote:
>
> On 01/09/14 23:09, Paul Mc Auley wrote:
>
> Hi Folks,
>
> Bit of a query on the process for setting this up and the best
> practices for same.
>
> I'm currently working with a prototype using 3.5.2 on Vagrant
> and I'm running into assorted failure modes with each pass.
>
> The general idea is I start with two sites A and B, where A has
> 3 bricks used to build a volume vol at replica 3 and
> B has 2 bricks at replica 2 used to build vol as well.
> I create 30 files in A::vol and then set up geo-replication
> from A to B after which I verify that the files have appeared
> in B::vol.
> What I want to do then is double the size of both volumes
> (presumably growing one and not the other is a bad thing)
> by adding 3 bricks to A and 2 bricks to B.
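For reference, the setup described above would look roughly like
this; the hostnames and brick paths are just placeholders:

    # site A (master): replica 3 across three nodes
    gluster volume create vol replica 3 a1:/bricks/vol a2:/bricks/vol a3:/bricks/vol
    gluster volume start vol

    # site B (slave): replica 2 across two nodes
    gluster volume create vol replica 2 b1:/bricks/vol b2:/bricks/vol
    gluster volume start vol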
>
> I've had this fail in a number of ways and so I have a number of
> questions.
>
> Is geo-replication from a replica 3 volume to a replica 2
> volume possible?
>
> Yes. Geo-replication just needs two gluster volumes (master ->
> slave). It doesn't matter what configuration the master and the
> slave have, but the slave should be big enough to hold all of the
> data in the master.
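As a rough sketch with placeholder names (volume "vol" on both
sides, b1 being one of the slave nodes), and assuming passwordless
SSH is already set up from one master node to the slave node, the
session is created and started like this:

    # run on one master node
    gluster system:: execute gsec_create
    gluster volume geo-replication vol b1::vol create push-pem
    gluster volume geo-replication vol b1::vol start

To check that the slave has enough room, "gluster volume status vol
detail" run on the slave cluster shows free and total disk space per
brick.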
>
> Should I stop geo-replication before adding additional bricks?
> (I assume yes)
>
> There is no need to stop geo-rep while adding more bricks to the
> volume.
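For example, with placeholder hostnames and brick paths, adding one
more replica set to each side would be something like:

    # on the master cluster (replica 3, so bricks are added in sets of 3)
    gluster volume add-brick vol a4:/bricks/vol a5:/bricks/vol a6:/bricks/vol

    # on the slave cluster (replica 2, so bricks are added in pairs)
    gluster volume add-brick vol b3:/bricks/vol b4:/bricks/vol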
>
> Should I stop the volume(s) before adding additional bricks?
> (Doesn't _seem_ to be the case)
>
> No.
>
> Should I rebalance the volume(s) after adding the bricks?
>
> Yes. After add-brick, rebalance should be run.
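For example, on each cluster where bricks were added:

    gluster volume rebalance vol start
    gluster volume rebalance vol status   # repeat until every node shows "completed"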
>
> Do I need to recreate the geo-replication to push-pem
> subsequently, or can I do that out-of-band?
> ...and if so should I have to add the passwordless SSH key
> back in? (As opposed to the restricted secret.pem)
> For that matter, in the initial setup, is it an expected failure
> mode that the initial geo-replication create will fail if the
> slave host's SSH key isn't known?
>
> After the add-brick, the newly added nodes will not have any pem
> files, so you need to do "geo-rep create push-pem force". This
> will push the pem files to the newly added nodes as well.
> Then you need to do "geo-rep start force" to start the gsync
> processes on the newly added nodes.
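With placeholder names again (volume "vol" on both sides, slave
node b1), that would be roughly:

    # on one master node, after the add-brick on both sides
    gluster volume geo-replication vol b1::vol create push-pem force
    gluster volume geo-replication vol b1::vol start force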
>
> So the sequence of steps for you will be:
>
> 1. Add new bricks (on the new nodes) to both the master and the
> slave using the gluster add-brick command.
>
> After this, we need to run "gluster system:: execute gsec_create" on
> the master node and then proceed with step 2.
Yeah. Missed it... Sorry :)
The pem files need to be generated for the newly added nodes before
pushing them to the slave. The above command does that.
>
> 2. Run geo-rep create push-pem force and start force.
> 3. Run rebalance.
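Once those three steps are done, a quick way to check that workers
have come up for the new bricks is the geo-rep status command (names
are placeholders again):

    gluster volume geo-replication vol b1::vol status detail

Each master brick should be listed with an Active or Passive worker.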
>
> Hope this works and hope it helps :)
>
>
> Best Regards,
> Vishwanath
>
>
>
> Thanks,
> Paul
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users
>
>