[Gluster-devel] new brick Status shows "Transport endpoint is not connected" in heal info after replace-brick on V3.7.11

柯名峻 fmpnate at gmail.com
Wed Jun 29 03:36:13 UTC 2016


Hi,

We have a disperse volume with 3 bricks.

After replacing a brick successfully, the new brick's status shows "Transport
endpoint is not connected" in the "heal info" output.
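
For reference, the brick was swapped in with the usual replace-brick command;
roughly the following was run (OLD_BRICK is only a placeholder here, since the
old path does not appear in the output below):

# gluster volume replace-brick v1 VM31:/export/OLD_BRICK/fs VM31:/export/lvol_vGBZvvcLmO/fs commit force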

I/O does not seem to be dispatched to that new brick.

After restarting the glusterd service, the new brick shows a good status in
the "heal info" output.

I/O is then dispatched to that new brick.
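
In other words, the workaround on the node hosting the new brick is just the
following (systemd shown; on sysvinit systems it would be "service glusterd
restart"), after which "heal info" reports the brick as connected:

# systemctl restart glusterd
# gluster volume heal v1 info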

The issue can be reproduced, but not consistently.
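
Since it only shows up occasionally, a loop along the following lines can be
used to hunt for it; this is just a sketch, and the spare brick path
(brickB) is hypothetical:

#!/bin/sh
# Alternate the first brick of volume v1 between two paths until
# "heal info" reports the transport error for the freshly replaced brick.
CUR=VM31:/export/lvol_vGBZvvcLmO/fs   # brick currently in the volume
NEXT=VM31:/export/brickB/fs           # hypothetical spare brick path
while :; do
    gluster volume replace-brick v1 "$CUR" "$NEXT" commit force
    sleep 10   # give the new brick process time to start
    if gluster volume heal v1 info | grep -q "Transport endpoint is not connected"; then
        echo "reproduced: $NEXT reported as not connected"
        break
    fi
    TMP=$CUR; CUR=$NEXT; NEXT=$TMP
done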



Thanks.

-----------------------------------------------------------------------------------------------
After replacing with the new brick "VM31:/export/lvol_vGBZvvcLmO/fs"
-----------------------------------------------------------------------------------------------

# gluster pool list
UUID                                    Hostname        State
ce5cb8d8-ac0c-42bf-9436-7add4438ee2a    VM31            Connected
d63e8967-194d-4bdc-ab22-dda0407582b5    VM32            Connected
6173c864-783c-493d-a133-28e99a45c6e8    VM33            Connected
5208fd22-8ac1-4656-a532-29bda734c898    localhost       Connected

# gluster volume status v1

Status of volume: v1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick VM31:/export/lvol_vGBZvvcLmO/fs       49170     0          Y       15891
Brick VM33:/export/bk1/fs                   49160     0          Y       12593
Brick VM30:/export/lvol_mbcwwy5fCq/fs       49189     0          Y       27193
Self-heal Daemon on localhost               N/A       N/A        Y       15899
Self-heal Daemon on VM33                    N/A       N/A        Y       15093
Self-heal Daemon on VM32                    N/A       N/A        Y       307
Self-heal Daemon on VM30                    N/A       N/A        Y       27907

Task Status of Volume v1
------------------------------------------------------------------------------
There are no active volume tasks

# gluster volume heal v1 info

Brick VM31:/export/lvol_vGBZvvcLmO/fs
Status: Transport endpoint is not connected
Number of entries: -

Brick VM33:/export/bk1/fs
/005
/001
/000
/004
Status: Connected
Number of entries: 4

Brick VM30:/export/lvol_mbcwwy5fCq/fs
/005
/001
/000
/004
Status: Connected
Number of entries: 4

# tail log of the new brick

132: volume v1-server
133:     type protocol/server
134:     option transport.socket.listen-port 49170
135:     option rpc-auth.auth-glusterfs on
136:     option rpc-auth.auth-unix on
137:     option rpc-auth.auth-null on
138:     option rpc-auth-allow-insecure on
139:     option transport-type tcp
140:     option auth.login./export/lvol_vGBZvvcLmO/fs.allow a588bb4c-3303-4965-baaf-71f03f1b4987
141:     option auth.login.a588bb4c-3303-4965-baaf-71f03f1b4987.password 8f0aa3c2-9234-4203-8184-3e124cfb079d
142:     option auth.addr./export/lvol_vGBZvvcLmO/fs.allow *
143:     option manage-gids on
144:     subvolumes /export/lvol_vGBZvvcLmO/fs
145: end-volume
146:
+------------------------------------------------------------------------------+
[2016-06-27 20:17:13.535905] I [glusterfsd-mgmt.c:58:mgmt_cbk_spec] 0-mgmt: Volume file changed
[2016-06-27 20:17:13.557358] I [glusterfsd-mgmt.c:58:mgmt_cbk_spec] 0-mgmt: Volume file changed
[2016-06-27 20:17:13.560013] I [graph.c:269:gf_add_cmdline_options] 0-v1-server: adding option 'listen-port' for volume 'v1-server' with value '49170'
[2016-06-27 20:17:13.560047] I [graph.c:269:gf_add_cmdline_options] 0-v1-posix: adding option 'glusterd-uuid' for volume 'v1-posix' with value 'ce5cb8d8-ac0c-42bf-9436-7add4438ee2a'
[2016-06-27 20:17:13.560218] I [MSGID: 121037] [changetimerecorder.c:1960:reconfigure] 0-v1-changetimerecorder: set
[2016-06-27 20:17:13.560382] I [MSGID: 0] [gfdb_sqlite3.c:1356:gf_sqlite3_set_pragma] 0-sqlite3: Value set on DB wal_autocheckpoint : 1000
[2016-06-27 20:17:13.560959] I [MSGID: 0] [gfdb_sqlite3.c:1356:gf_sqlite3_set_pragma] 0-sqlite3: Value set on DB cache_size : 1000
(End of file)

