[Gluster-users] replacing one brick in a distribute replicate volume in gluster 3.8

Joseph Lorenzini jaloren at gmail.com
Sun Feb 5 22:01:57 UTC 2017

Okay, so a specific permutation of the replace-brick command does seem
to do what I was looking for. In this case server1 is being replaced by
server2. The command executes successfully and I then see the data on
server2.

gluster volume replace-brick gv0 server1:/data/glusterfs/gv0/brick1/brick
server2:/data/glusterfs/gv0/brick1/brick commit force

However, the reason I had initially stopped trying to use replace-brick is
a post on this list saying that the command is going to be deprecated and
that there was a better way. So what are people's opinions? Is
replace-brick the right way to do this, or should it be handled differently?
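For reference, here is a sketch of the full procedure I ended up with for
replacing a dead server. The server names and brick paths are from my setup,
and the heal check at the end is just how I verified the data was copied:

```shell
# Add the replacement server to the trusted storage pool first
# (server2 is the new node in my setup).
gluster peer probe server2

# Swap the failed brick for the new one in a single step.
# "commit force" is required for replicated volumes in 3.8.
gluster volume replace-brick gv0 \
    server1:/data/glusterfs/gv0/brick1/brick \
    server2:/data/glusterfs/gv0/brick1/brick \
    commit force

# Self-heal copies the data onto the new brick in the background;
# re-run this until the list of pending entries drains to zero.
gluster volume heal gv0 info
```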



On Sun, Feb 5, 2017 at 3:40 PM, Joseph Lorenzini <jaloren at gmail.com> wrote:

> All:
> I am quite new to gluster so this is likely my lack of knowledge. Here's
> my scenario: I have a distribute replicate volume with a replica count of
> 3. Each brick is on a different server and the total number of bricks in
> the volume is 3.
> Now let's say one server goes bad or down. I want to bring up a new
> server with a single brick on it, add that into my volume, and then
> replicate a copy of all the files from one of the existing bricks to
> the new one.
> What's the procedure for doing that?
> At first I thought I would just add a new brick in and then remove the
> other brick. However, that didn't seem to work. The add-brick command
> wanted a new set of bricks matching the volume's replica count (so in this
> case three new bricks). Moreover, that would create a different replica
> set, which of course defeats the purpose of what I am trying to do.
> Thanks,
> Joe
> PS I did try using replace-brick but discovered on this list that it was
> considered a deprecated command as far back as 2012.
