[Gluster-users] Removing brick from distributed volume

Kyle Johnson kjohnson at gnulnx.net
Tue Jan 3 20:52:38 UTC 2017


Hi everyone.


I have a distributed volume across two nodes, each with one brick, and
I need to move all of the data from one server to the other so that I
can take the first server out of rotation for maintenance (OS
reinstall).  Both servers run Gluster 3.7.17.

[root@colossus ~]# gluster volume status ftp detail
Status of volume: ftp
------------------------------------------------------------------------------
Brick                : Brick 192.168.110.1:/tank/bricks/ftp
TCP Port             : 49159
RDMA Port            : 0
Online               : Y
Pid                  : 45453
File System          : N/A
Device               : N/A
Mount Options        : N/A
Inode Size           : N/A
Disk Space Free      : 11.8PB
Total Disk Space     : 22.9PB
Inode Count          : 101678383105
Free Inodes          : 101666527733
------------------------------------------------------------------------------
Brick                : Brick 192.168.110.2:/ftp/bricks/ftp
TCP Port             : 49152
RDMA Port            : 0
Online               : Y
Pid                  : 18079
File System          : zfs
Device               : storage/bricks
Mount Options        : rw,noatime
Inode Size           : N/A
Disk Space Free      : 19.6TB
Total Disk Space     : 71.8TB
Inode Count          : 42172798229
Free Inodes          : 42169081889


Due to https://bugzilla.redhat.com/show_bug.cgi?id=1356076 and
https://bugzilla.redhat.com/show_bug.cgi?id=1373618, I am not confident
that the rebalance triggered by `gluster volume remove-brick ... start`
would properly move files between the bricks.
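
For reference, the flow I would otherwise follow is the standard
remove-brick sequence below (volume and brick names taken from the
status output above); my worry is specifically about the data migration
that the start phase performs:

[root@colossus ~]# gluster volume remove-brick ftp 192.168.110.1:/tank/bricks/ftp start
[root@colossus ~]# gluster volume remove-brick ftp 192.168.110.1:/tank/bricks/ftp status
... wait until status reports "completed", then:
[root@colossus ~]# gluster volume remove-brick ftp 192.168.110.1:/tank/bricks/ftp commit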

The bricks on both servers are ZFS-backed, so I could easily replicate
the to-be-removed brick with `zfs send | ssh zfs receive`. With that
route, however, I am concerned about whether the extended attributes
will carry over properly, since one server runs CentOS (the one to be
removed) and the other runs FreeBSD.
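
Concretely, the replication step would look something like this (the
snapshot name and the target dataset are placeholders I made up, and
`-F` on the receive assumes the target dataset can be overwritten):

[root@colossus ~]# zfs snapshot tank/bricks/ftp@migrate
[root@colossus ~]# zfs send tank/bricks/ftp@migrate | ssh 192.168.110.2 zfs receive -F storage/bricks/ftp-migrated

Afterwards I would spot-check the Gluster xattrs on a few files on both
sides, e.g. as root:

# on the CentOS side:
getfattr -d -m . -e hex /tank/bricks/ftp/somefile
# on the FreeBSD side (I'm not sure which namespace the trusted.*
# attributes land in after a cross-platform receive, hence checking both):
lsextattr -q user /path/to/received/somefile
lsextattr -q system /path/to/received/somefile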

How should I proceed?

