[Gluster-users] Question about replace brick in glusterFS 3.5

Pranith Kumar Karampuri pkarampu at redhat.com
Fri Jul 25 08:58:10 UTC 2014


hi,
Steps for replace-brick:
0) Make sure the new brick is empty before replace-brick.
1) Check that all the bricks are running. It is okay if the brick that
is going to be replaced is down.
2) Bring down the brick that is going to be replaced, if it is not down
already.
     - Get the pid of the brick by executing 'gluster volume status
<volname>'

12:37:49 ⚡ gluster volume status
Status of volume: r2
Gluster process                        Port    Online    Pid
------------------------------------------------------------------------------ 

Brick pranithk-laptop:/home/gfs/r2_0            49152    Y    5342 <<--- this is the brick we want to replace, let's say.
Brick pranithk-laptop:/home/gfs/r2_1            49153    Y    5354
Brick pranithk-laptop:/home/gfs/r2_2            49154    Y    5365
Brick pranithk-laptop:/home/gfs/r2_3            49155    Y    5376
....

     - Login to the machine where the brick is running and kill the brick.

root@pranithk-laptop - /mnt/r2
12:38:33 ⚡ kill -9 5342

     - Confirm that the brick is not running anymore and the other 
bricks are running fine.

12:38:38 ⚡ gluster volume status
Status of volume: r2
Gluster process                        Port    Online    Pid
------------------------------------------------------------------------------ 

Brick pranithk-laptop:/home/gfs/r2_0            N/A    N    5342 <<---- 
brick is not running, others are running fine.
Brick pranithk-laptop:/home/gfs/r2_1            49153    Y    5354
Brick pranithk-laptop:/home/gfs/r2_2            49154    Y    5365
Brick pranithk-laptop:/home/gfs/r2_3            49155    Y    5376
....
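If you want to script the pid lookup above rather than read it off the
status table, the pid is the last column of the matching Brick line. A
minimal sketch, parsing a sample line copied from the output above (in
practice you would pipe `gluster volume status r2` through the same awk;
the `brick_pid` helper name is my own, not a gluster command):

```shell
# Hypothetical helper: pull the pid for one brick out of
# 'gluster volume status' output (last column, per the layout above).
brick_pid() {
    # $1 = brick path to match; stdin = status output
    awk -v b="$1" '$1 == "Brick" && index($2, b) { print $NF }'
}

# Sample line copied from the status output above:
echo 'Brick pranithk-laptop:/home/gfs/r2_0            49152    Y    5342' \
    | brick_pid /home/gfs/r2_0   # prints 5342
```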

3) Set up metadata so that heal will happen from the other brick in the
replica pair to the one that is going to be replaced (in this case from
/home/gfs/r2_1 -> /home/gfs/r2_5):
      - Create a directory on the mount point that doesn't already
exist, then delete that directory. Do the same for the metadata
changelog by setting and then removing an xattr with setfattr. These
operations mark the pending changelogs, which tell the self-heal
daemon/mounts to perform self-heal from /home/gfs/r2_1 to
/home/gfs/r2_5.

        mkdir /mnt/r2/<name-of-nonexistent-dir>
        rmdir /mnt/r2/<name-of-nonexistent-dir>
        setfattr -n trusted.non-existent-key -v abc /mnt/r2
        setfattr -x trusted.non-existent-key  /mnt/r2
        NOTE: '/mnt/r2' is the mount path.
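The four commands above can be wrapped into one function; a sketch only
(the marker directory name and the DRY_RUN switch are my own additions,
not part of gluster):

```shell
# Sketch of step 3's marking commands as one shell function. DRY_RUN=1
# prints the commands instead of running them (setfattr on trusted.*
# xattrs needs root on the real mount).
mark_pending() {
    mnt=$1
    dir="$mnt/nonexistent-dir-for-replace-brick"   # any name that does not exist yet
    run() { if [ "${DRY_RUN:-0}" = 1 ]; then echo "$@"; else "$@"; fi; }
    run mkdir "$dir"
    run rmdir "$dir"
    run setfattr -n trusted.non-existent-key -v abc "$mnt"
    run setfattr -x trusted.non-existent-key "$mnt"
}

DRY_RUN=1 mark_pending /mnt/r2
```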

     - Check that there are pending xattrs:

getfattr -d -m. -e hex /home/gfs/r2_1
# file: home/gfs/r2_1
security.selinux=0x756e636f6e66696e65645f753a6f626a6563745f723a66696c655f743a733000 

trusted.afr.r2-client-0=0x000000000000000300000002 <<---- xattrs are 
marked from source brick pranithk-laptop:/home/gfs/r2_1
trusted.afr.r2-client-1=0x000000000000000000000000
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x0000000100000000000000007ffffffe
trusted.glusterfs.volume-id=0xde822e25ebd049ea83bfaa3c4be2b440
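As I understand the AFR changelog layout, a trusted.afr.* value packs
three 32-bit big-endian counters: pending data, metadata and entry
operations (verify against your gluster version). A sketch decoding the
non-zero value shown above:

```shell
# Split a trusted.afr.* hex value into its three 32-bit counters
# (data / metadata / entry pending operations -- my reading of the
# AFR changelog format).
val=000000000000000300000002   # the value above, minus the leading 0x
data=$((16#${val:0:8}))
meta=$((16#${val:8:8}))
entry=$((16#${val:16:8}))
echo "data=$data metadata=$meta entry=$entry"   # prints data=0 metadata=3 entry=2
```

A source brick with any non-zero counter here is what tells self-heal
that the other brick in the pair needs healing.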

4) Replace the brick with the 'commit force' option
- Execute the replace-brick command:

root@pranithk-laptop - /mnt/r2
12:58:46 ⚡ gluster volume replace-brick r2 `hostname`:/home/gfs/r2_0 \
    `hostname`:/home/gfs/r2_5 commit force
volume replace-brick: success: replace-brick commit successful

- Check that the new brick is now online
root@pranithk-laptop - /mnt/r2
12:59:21 ⚡ gluster volume status
Status of volume: r2
Gluster process                        Port    Online    Pid
------------------------------------------------------------------------------ 

Brick pranithk-laptop:/home/gfs/r2_5            49156    Y    5731 
<<<---- new brick is online
Brick pranithk-laptop:/home/gfs/r2_1            49153    Y    5354
Brick pranithk-laptop:/home/gfs/r2_2            49154    Y    5365
Brick pranithk-laptop:/home/gfs/r2_3            49155    Y    5376
...

- Once self-heal completes, the pending changelogs will be cleared.

12:59:27 ⚡ getfattr -d -m. -e hex /home/gfs/r2_1
getfattr: Removing leading '/' from absolute path names
# file: home/gfs/r2_1
security.selinux=0x756e636f6e66696e65645f753a6f626a6563745f723a66696c655f743a733000 

trusted.afr.r2-client-0=0x000000000000000000000000 <<---- Pending 
changelogs are cleared.
trusted.afr.r2-client-1=0x000000000000000000000000
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x0000000100000000000000007ffffffe
trusted.glusterfs.volume-id=0xde822e25ebd049ea83bfaa3c4be2b440
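If you would rather poll for completion than re-run getfattr, 'gluster
volume heal r2 info' lists the entries still needing heal per brick. A
sketch of checking its counts (the sample text stands in for the real
command's output, and the parsing assumes its 'Number of entries:' line
format):

```shell
# Sum outstanding heal entries from 'gluster volume heal <vol> info'
# output; heal is done when every brick reports zero. The sample input
# below stands in for the real command's output.
heal_info_sample='Number of entries: 0
Number of entries: 0'

total=$(echo "$heal_info_sample" \
    | awk -F': ' '/^Number of entries/ { n += $2 } END { print n+0 }')
[ "$total" -eq 0 ] && echo "self-heal complete"
```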

Pranith

On 07/25/2014 09:52 AM, 可樂我 wrote:
> Hi everyone,
> I have a question about replacing a brick in GlusterFS 3.5.
> If I want to use a new brick on another node to replace an old brick,
> can I use the "gluster vol XXX replace-brick" command to do it?
>
> I found some information about replace-brick on the internet;
> it says:
>
> When we initially came up with the specs of 'glusterd', we needed an
> option to replace a dead brick, and a few people even requested an
> option to migrate the data off the brick being replaced.
>
> The result of this is the 'gluster volume replace-brick' CLI, and in
> releases up to 3.3.0 this was the only way to 'migrate' data off a
> removed brick properly.
>
> Now, with 3.3.0+ (i.e., in upstream too), we have another *better*
> approach (technically), which is achieved by the methods below
>
>
> So can I use the replace-brick command directly?
> Is there any method to replace a brick in GlusterFS without data loss?
> Can anyone help me?
> Thanks a lot
