[Gluster-users] Fwd: Moving brick of replica volume to new mount on filesystem.

shwetha spandura at redhat.com
Wed Aug 27 07:52:08 UTC 2014


To replace a brick in a replicated volume, you can simply run the 
"replace-brick" command with the "commit force" option.

Following is the procedure to replace a brick in a replicated volume.

## Replacing a brick in Replicate/Distributed Replicate volumes

This section describes how the brick 
`pranithk-laptop:/home/gfs/r2_0` is replaced with the brick 
`pranithk-laptop:/home/gfs/r2_5` in the volume `r2`, which has a replica count of `2`.

Steps:
0. Make sure there is no data in the new brick 
`pranithk-laptop:/home/gfs/r2_5`.
1. Check that all the bricks are running. It is okay if the brick that 
is going to be replaced is down.
2. Bring down the brick that is going to be replaced, if it is not 
already down.

   1. Get the pid of the brick by executing `gluster volume status 
<volname>`.

     ```
     12:37:49 ? gluster volume status
     Status of volume: r2
     Gluster process                                  Port    Online  Pid
     ------------------------------------------------------------------------------
     Brick pranithk-laptop:/home/gfs/r2_0             49152   Y       5342
     Brick pranithk-laptop:/home/gfs/r2_1             49153   Y       5354
     Brick pranithk-laptop:/home/gfs/r2_2             49154   Y       5365
     Brick pranithk-laptop:/home/gfs/r2_3             49155   Y       5376
     ```

   2. Login to the machine where the brick is running and kill the brick 
process (a scripted alternative is sketched at the end of this step).

     ```
     12:38:33 ? kill -9 5342
     ```

   3. Confirm that the brick is not running anymore and the other bricks 
are running fine.

     ```
     12:38:38 ? gluster volume status
     Status of volume: r2
     Gluster process                                  Port    Online  Pid
     ------------------------------------------------------------------------------
     Brick pranithk-laptop:/home/gfs/r2_0             N/A     N       5342   <<---- this brick is not running; the others are running fine
     Brick pranithk-laptop:/home/gfs/r2_1             49153   Y       5354
     Brick pranithk-laptop:/home/gfs/r2_2             49154   Y       5365
     Brick pranithk-laptop:/home/gfs/r2_3             49155   Y       5376
     ```
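
   As a scripted alternative to copying the pid by hand, the brick process 
   can be matched by its brick path. This is only a sketch: it assumes 
   `/home/gfs/r2_0` appears on exactly one glusterfsd command line on this 
   machine, so verify the match before killing.

     ```
     # list processes whose command line contains the brick path
     ps aux | grep '[/]home/gfs/r2_0'
     # glusterfsd normally carries the brick path on its command line
     pkill -9 -f '/home/gfs/r2_0'
     ```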

3. Using the gluster volume fuse mount (in this example `/mnt/r2`), set 
up metadata so that the data will be synced to the new brick (in this case 
from `pranithk-laptop:/home/gfs/r2_1` to `pranithk-laptop:/home/gfs/r2_5`).
   1. Create a directory on the mount point that doesn't already exist, 
then delete it. Do the same for the metadata changelog by setting and 
then removing a dummy extended attribute with setfattr. These operations 
mark the pending changelog, which tells the self-heal daemon/mounts to 
perform self-heal from /home/gfs/r2_1 to /home/gfs/r2_5.

     ```
     mkdir /mnt/r2/<name-of-nonexistent-dir>
     rmdir /mnt/r2/<name-of-nonexistent-dir>
     setfattr -n trusted.non-existent-key -v abc /mnt/r2
     setfattr -x trusted.non-existent-key  /mnt/r2
     ```

   2. Check that there are pending xattrs:

     ```
     getfattr -d -m. -e hex /home/gfs/r2_1
     # file: home/gfs/r2_1
     security.selinux=0x756e636f6e66696e65645f753a6f626a6563745f723a66696c655f743a733000
     trusted.afr.r2-client-0=0x000000000000000300000002 <<---- xattrs are marked on the source brick pranithk-laptop:/home/gfs/r2_1
     trusted.afr.r2-client-1=0x000000000000000000000000
     trusted.gfid=0x00000000000000000000000000000001
     trusted.glusterfs.dht=0x0000000100000000000000007ffffffe
     trusted.glusterfs.volume-id=0xde822e25ebd049ea83bfaa3c4be2b440
     ```
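
   For reference, the 24 hex digits of a `trusted.afr.<vol>-client-N` value 
   are three 32-bit counters: pending data, metadata and entry operations, in 
   that order. A small illustrative bash helper to decode the value shown 
   above (the variable name and snippet are just a sketch):

     ```
     # decode a trusted.afr.* changelog value (leading 0x stripped)
     val=000000000000000300000002
     echo "data=$((16#${val:0:8})) metadata=$((16#${val:8:8})) entry=$((16#${val:16:8}))"
     # prints: data=0 metadata=3 entry=2
     ```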

4. Replace the brick using the 'commit force' option. Please note that 
other variants of the replace-brick command are not supported.

   1. Execute replace-brick command

     ```
     12:58:46 ? gluster volume replace-brick r2 
`hostname`:/home/gfs/r2_0 `hostname`:/home/gfs/r2_5 commit force
     volume replace-brick: success: replace-brick commit successful
     ```

   2. Check that the new brick is now online

     ```
     12:59:21 ? gluster volume status
     Status of volume: r2
     Gluster process                                  Port    Online  Pid
     ------------------------------------------------------------------------------
     Brick pranithk-laptop:/home/gfs/r2_5             49156   Y       5731   <<---- the new brick is online
     Brick pranithk-laptop:/home/gfs/r2_1             49153   Y       5354
     Brick pranithk-laptop:/home/gfs/r2_2             49154   Y       5365
     Brick pranithk-laptop:/home/gfs/r2_3             49155   Y       5376
     ```

   3. Once self-heal completes, the pending changelogs will be cleared.

     ```
     12:59:27 ? getfattr -d -m. -e hex /home/gfs/r2_1
     getfattr: Removing leading '/' from absolute path names
     # file: home/gfs/r2_1
     security.selinux=0x756e636f6e66696e65645f753a6f626a6563745f723a66696c655f743a733000
     trusted.afr.r2-client-0=0x000000000000000000000000 <<---- pending changelogs are cleared
     trusted.afr.r2-client-1=0x000000000000000000000000
     trusted.gfid=0x00000000000000000000000000000001
     trusted.glusterfs.dht=0x0000000100000000000000007ffffffe
     trusted.glusterfs.volume-id=0xde822e25ebd049ea83bfaa3c4be2b440
     ```
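
To watch the heal progress from any node, you can trigger and monitor 
self-heal (using the volume name `r2` from this example):

```
gluster volume heal r2         # trigger an index self-heal (the self-heal daemon also runs periodically)
gluster volume heal r2 info    # show entries still pending heal on each brick
```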


On 08/27/2014 02:59 AM, Joseph Jozwik wrote:
> To add to this, it appears that replace-brick is in a broken state. I 
> can't abort it or commit it, and I can't run any other commands until 
> it thinks the replace-brick is complete.
>
> Is there a way to manually remove the task since it failed?
>
>
> root at pixel-glusterfs1:/# gluster volume status gdata2tb
> Status of volume: gdata2tb
> Gluster process                         Port    Online  Pid
> ------------------------------------------------------------------------------
> Brick 10.0.1.31:/mnt/data2tb/gbrick3    49157   Y       14783
> Brick 10.0.1.152:/mnt/raid10/gbrick3    49158   Y       2622
> Brick 10.0.1.153:/mnt/raid10/gbrick3    49153   Y       3034
> NFS Server on localhost                 2049    Y       14790
> Self-heal Daemon on localhost           N/A     Y       14794
> NFS Server on 10.0.0.205                N/A     N       N/A
> Self-heal Daemon on 10.0.0.205          N/A     Y       10323
> NFS Server on 10.0.1.153                2049    Y       12735
> Self-heal Daemon on 10.0.1.153          N/A     Y       12742
>
>            Task                                      ID         Status
>            ----                                      --         ------
>   Replace brick    1dace9f0-ba98-4db9-9124-c962e74cce07      completed
>
>
> ---------- Forwarded message ----------
> From: *Joseph Jozwik* <jjozwik at printsites.com>
> Date: Tue, Aug 26, 2014 at 3:42 PM
> Subject: Moving brick of replica volume to new mount on filesystem.
> To: gluster-users at gluster.org
>
>
>
> Hello,
>
> I need to move a brick to another location on the filesystem.
> My initial plan was to stop the gluster server with
> 1. service glusterfs-server stop
> 2. rsync -ap brick3 folder to new volume on server
> 3. umount old volume and bind mount the new to the same location.
>
> However, after I stopped glusterfs-server on the node there were still 
> gluster background processes (glusterd) running, and I was not sure how 
> to safely stop them.
>
>
> I also attempted a replace-brick to a new location on the server, but 
> that did not work; it failed with "volume replace-brick: failed: Commit 
> failed on localhost. Please check the log file for more details."
>
> Then I attempted remove-brick with
>
> "volume remove-brick gdata2tb replica 2 10.0.1.31:/mnt/data2tb/gbrick3 
> start"
> gluster> volume remove-brick gdata2tb 10.0.1.31:/mnt/data2tb/gbrick3 
> status
> volume remove-brick: failed: Volume gdata2tb is not a distribute 
> volume or contains only 1 brick.
> Not performing rebalance
> gluster>
>
>
>
> Volume Name: gdata2tb
> Type: Replicate
> Volume ID: 6cbcb2fc-9fd7-467e-9561-bff1937e8492
> Status: Started
> Number of Bricks: 1 x 3 = 3
> Transport-type: tcp
> Bricks:
> Brick1: 10.0.1.31:/mnt/data2tb/gbrick3
> Brick2: 10.0.1.152:/mnt/raid10/gbrick3
> Brick3: 10.0.1.153:/mnt/raid10/gbrick3
>
>
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users

