[Gluster-users] Problem with add-brick

Dennis Michael dennis.michael at gmail.com
Mon Sep 26 23:55:35 UTC 2016


I am trying to add a fourth server to my distributed Gluster setup, but the
'add-brick' command keeps failing.  I've tried several times, each time
cleaning the new server by stopping and uninstalling Gluster, unmounting
and re-running mkfs on the brick filesystem, and deleting all Gluster files
(/var/log/glusterfs, /var/lib/glusterfs) before re-installing.  On fs1, I
remove-brick the new server's brick and detach the peer, and then start
over.  It keeps failing at the same point (see the cleanup sequence below).
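
For reference, this is roughly the cleanup/retry sequence I run on each
attempt.  The device name (/dev/sdb1) and package name are from memory, so
treat them as approximate:

# On fs1: remove the half-added brick and detach the peer
[root at fs1]# gluster volume remove-brick cees-data fs4:/data/brick force
[root at fs1]# gluster peer detach fs4

# On fs4: stop and uninstall gluster, wipe its files, rebuild the brick filesystem
[root at fs4]# systemctl stop glusterd
[root at fs4]# yum remove glusterfs-server
[root at fs4]# rm -rf /var/log/glusterfs /var/lib/glusterfs
[root at fs4]# umount /data/brick
[root at fs4]# mkfs.xfs -f /dev/sdb1        # device name is a guess
[root at fs4]# mount /data/brick            # assumes an fstab entry for the brick
[root at fs4]# yum install glusterfs-server
[root at fs4]# systemctl start glusterd

# Back on fs1: probe the peer again and retry the add-brick
[root at fs1]# gluster peer probe fs4
[root at fs1]# gluster volume add-brick cees-data fs4:/data/brick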

The servers have identical hardware and software.

What should I look for?

CentOS 7.2
Gluster 3.7.14-1

Server names are fs1, fs2, and fs3, and the new server is fs4.  fs1, fs2,
and fs3 have been running for several months.

[root at fs1]# gluster volume add-brick cees-data fs4:/data/brick
volume add-brick: failed: Commit failed on fs4. Please check log file for
details.

[root at fs1]# gluster volume info
Volume Name: cees-data
Type: Distribute
Volume ID: 27d2a59c-bdac-4f66-bcd8-e6124e53a4a2
Status: Started
Number of Bricks: 4
Transport-type: tcp,rdma
Bricks:
Brick1: fs1:/data/brick
Brick2: fs2:/data/brick
Brick3: fs3:/data/brick
Brick4: fs4:/data/brick
Options Reconfigured:
performance.readdir-ahead: on
features.quota: on
features.inode-quota: on
features.quota-deem-statfs: on

[root at fs1]# gluster volume status
Status of volume: cees-data
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick fs1:/data/brick                       49152     49153      Y       1878
Brick fs2:/data/brick                       49152     0          Y       1707
Brick fs3:/data/brick                       49152     0          Y       4696
NFS Server on fs4                           2049      0          Y       12190
NFS Server on localhost                     2049      0          Y       4838
Quota Daemon on localhost                   N/A       N/A        Y       4846
Quota Daemon on fs4                         N/A       N/A        Y       12198
NFS Server on fs3                           2049      0          Y       11084
Quota Daemon on fs3                         N/A       N/A        Y       11092

Task Status of Volume cees-data
------------------------------------------------------------------------------
There are no active volume tasks


From the glusterd log on the new server fs4:

[2016-09-26 22:44:38.605539] I [run.c:190:runner_log]
(-->/usr/lib64/glusterfs/3.7.14/xlator/mgmt/glusterd.so(glusterd_op_commit_hook+0x195)
[0x7f257bed20e5]
-->/usr/lib64/glusterfs/3.7.14/xlator/mgmt/glusterd.so(glusterd_hooks_run_hooks+0x4c5)
[0x7f257bf66e95] -->/lib64/libglusterfs.so.0(runner_log+0x115)
[0x7f25873cecd5] ) 0-management: Ran script:
/var/lib/glusterd/hooks/1/add-brick/pre/S28Quota-enable-root-xattr-heal.sh
--volname=cees-data --version=1 --volume-op=add-brick
--gd-workdir=/var/lib/glusterd
[2016-09-26 22:44:39.254422] I [MSGID: 106143]
[glusterd-pmap.c:231:pmap_registry_bind] 0-pmap: adding brick /data/brick
on port 49152
[2016-09-26 22:44:39.254510] I [MSGID: 106143]
[glusterd-pmap.c:231:pmap_registry_bind] 0-pmap: adding brick
/data/brick.rdma on port 49153
[2016-09-26 22:44:39.254921] E [MSGID: 106005]
[glusterd-utils.c:4771:glusterd_brick_start] 0-management: Unable to start
brick fs4:/data/brick
[2016-09-26 22:44:39.254949] E [MSGID: 106074]
[glusterd-brick-ops.c:2372:glusterd_op_add_brick] 0-glusterd: Unable to add
bricks
[2016-09-26 22:44:39.254958] E [MSGID: 106123]
[glusterd-mgmt.c:294:gd_mgmt_v3_commit_fn] 0-management: Add-brick commit
failed.
[2016-09-26 22:44:39.254965] E [MSGID: 106123]
[glusterd-mgmt-handler.c:603:glusterd_handle_commit_fn] 0-management:
commit failed on operation Add brick
[2016-09-26 22:45:38.146318] I [MSGID: 106144]
[glusterd-pmap.c:276:pmap_registry_remove] 0-pmap: removing brick
/data/brick on port 49152
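
Since glusterd reports that it is unable to start the brick, I have also
been checking on fs4 whether the brick process comes up at all, and looking
at the brick log.  I am assuming the brick log lands under
/var/log/glusterfs/bricks/ and is named after the brick path:

[root at fs4]# ps aux | grep glusterfsd
[root at fs4]# less /var/log/glusterfs/bricks/data-brick.log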