[Gluster-users] gluster volume permissions denied

vincent gromakowski vincent.gromakowski at gmail.com
Thu Dec 29 17:19:29 UTC 2016


Hi all,
Any ideas regarding the log output below?
What ACLs should be set on the brick directories, or on the gluster brick root?
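For reference, this is roughly what I am checking on one of the clients (a rough
sketch only; the user name "vincent" is just an example, and the acl mount option
only matters if POSIX ACLs are actually used on the volume):

  >id vincent                                   # uid/gid of the non-root user that is denied
  >ls -ld /srv/data/small                       # mode/ownership as seen through the FUSE mount
  >sudo chown vincent:vincent /srv/data/small   # ownership/permission changes go through the mount point, never directly on the bricks
  >getfacl /srv/data/small                      # ACLs, only meaningful if the volume is mounted with the acl option
  >sudo setfacl -m u:vincent:rwx /srv/data/small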

2016-12-28 11:25 GMT+01:00 vincent gromakowski <
vincent.gromakowski at gmail.com>:

> Hi,
> Please find the outputs below. I should point out that I can read and write
> to the volume, but only with "sudo" or the root account, whatever ACLs or
> ownership I set on the FUSE-mounted directories (even 777).
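>
> As an illustration of what I mean (the user name is only an example), even
> with wide-open permissions a plain user is denied while root is not:
>
>   >ls -ld /srv/data/small                        # shows drwxrwxrwx after my chmod 777
>   >sudo -u someuser touch /srv/data/small/test   # denied for the plain user
>   >sudo touch /srv/data/small/test               # works as root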
>
> >sudo gluster peer status
> Number of Peers: 3
>
> Hostname: bd-reactive-worker-4
> Uuid: 434a7ee0-9c83-47ce-9a02-7c89e2e94ce0
> State: Peer in Cluster (Connected)
>
> Hostname: bd-reactive-worker-2
> Uuid: 7f76389c-3f78-4cac-8fd8-56f0a9bff47a
> State: Peer in Cluster (Connected)
>
> Hostname: bd-reactive-worker-3
> Uuid: e412cae9-6ecd-49cf-be63-c46d3e537c83
> State: Peer in Cluster (Connected)
>
>
> >sudo gluster volume status
>
> Status of volume: reactive_small
> Gluster process                                             TCP Port  RDMA Port  Online  Pid
> ------------------------------------------------------------------------------
> Brick bd-reactive-worker-1:/srv/gluster/data/small/brick1   49155     0          Y       31517
> Brick bd-reactive-worker-2:/srv/gluster/data/small/brick1   49155     0          Y       1147
> Brick bd-reactive-worker-3:/srv/gluster/data/small/brick1   49155     0          Y       32455
> Brick bd-reactive-worker-4:/srv/gluster/data/small/brick1   49155     0          Y       675
> Brick bd-reactive-worker-1:/srv/gluster/data/small/brick2   49156     0          Y       31536
> Brick bd-reactive-worker-2:/srv/gluster/data/small/brick2   49156     0          Y       1167
> Brick bd-reactive-worker-3:/srv/gluster/data/small/brick2   49156     0          Y       32474
> Brick bd-reactive-worker-4:/srv/gluster/data/small/brick2   49156     0          Y       696
> Brick bd-reactive-worker-1:/srv/gluster/data/small/brick3   49157     0          Y       31555
> Brick bd-reactive-worker-2:/srv/gluster/data/small/brick3   49157     0          Y       1190
> Brick bd-reactive-worker-3:/srv/gluster/data/small/brick3   49157     0          Y       32493
> Brick bd-reactive-worker-4:/srv/gluster/data/small/brick3   49157     0          Y       715
> Self-heal Daemon on localhost                               N/A       N/A        Y       31575
> Self-heal Daemon on bd-reactive-worker-4                    N/A       N/A        Y       736
> Self-heal Daemon on bd-reactive-worker-3                    N/A       N/A        Y       32518
> Self-heal Daemon on bd-reactive-worker-2                    N/A       N/A        Y       1227
>
> Task Status of Volume reactive_small
> ------------------------------------------------------------------------------
> There are no active volume tasks
>
>
> 2016-12-28 11:11 GMT+01:00 knarra <knarra at redhat.com>:
>
>> On 12/28/2016 02:42 PM, vincent gromakowski wrote:
>>
>> Hi,
>> Can someone help me solve this issue? I am really stuck on it and I
>> can't find any workaround...
>> Thanks a lot.
>>
>> V
>>
>> Hi,
>>
>>     What does gluster volume status show? I think it is because of quorum
>> that you are not able to read from / write to the volume. Can you check
>> whether all your bricks are online, and can you paste the output of your
>> gluster peer status? In the glusterd.log I see: "Peer
>> <bd-reactive-worker-1> (<59500674-750f-4e16-aeea-4a99fd67218a>), in state
>> <Peer in Cluster>, has disconnected from glusterd."
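>>
>>     Something like the following would show whether quorum is the culprit
>> (a rough sketch using your volume name; the quorum options may simply be at
>> their defaults):
>>
>>     >sudo gluster peer status
>>     >sudo gluster volume status reactive_small
>>     >sudo gluster volume get reactive_small cluster.quorum-type
>>     >sudo gluster volume get reactive_small cluster.server-quorum-type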
>>
>> Thanks
>> kasturi.
>>
>>
>> 2016-12-26 15:02 GMT+01:00 vincent gromakowski <
>> vincent.gromakowski at gmail.com>:
>>
>>> Hi all,
>>> I am currently setting up a gluster volume on 4 CentOS 7.2 nodes.
>>> Everything seems to be OK from the volume creation to the FUSE mount, but
>>> after that I can't access data (read or write) without sudo, even if I set
>>> 777 permissions.
>>> I have checked that permissions on the underlying FS (an XFS volume) are OK,
>>> so I assume the problem is in Gluster, but I can't find where.
>>> I am using Ansible to deploy gluster, create the volumes and mount the FUSE
>>> endpoint.
>>> Please find below some information:
>>>
>>> The line in /etc/fstab for mounting the raw device
>>>
>>> LABEL=/gluster /srv/gluster/data xfs defaults 0 0
>>>
>>> The line in /etc/fstab for mounting the FUSE endpoint
>>> bd-reactive-worker-2:/reactive_small /srv/data/small glusterfs defaults,_netdev 0 0
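>>>
>>> (One thing I am unsure about: should the FUSE mount also carry the acl
>>> option so that POSIX ACLs are honoured on the mount, e.g. something like
>>>   bd-reactive-worker-2:/reactive_small /srv/data/small glusterfs defaults,acl,_netdev 0 0
>>> or is that unrelated to this problem?)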
>>>
>>> >sudo gluster volume info
>>>
>>> Volume Name: reactive_small
>>> Type: Distributed-Replicate
>>> Volume ID: f0abede2-eab3-4a0b-8271-ffd6f3c83eb6
>>> Status: Started
>>> Snapshot Count: 0
>>> Number of Bricks: 4 x 3 = 12
>>> Transport-type: tcp
>>> Bricks:
>>> Brick1: bd-reactive-worker-1:/srv/gluster/data/small/brick1
>>> Brick2: bd-reactive-worker-2:/srv/gluster/data/small/brick1
>>> Brick3: bd-reactive-worker-3:/srv/gluster/data/small/brick1
>>> Brick4: bd-reactive-worker-4:/srv/gluster/data/small/brick1
>>> Brick5: bd-reactive-worker-1:/srv/gluster/data/small/brick2
>>> Brick6: bd-reactive-worker-2:/srv/gluster/data/small/brick2
>>> Brick7: bd-reactive-worker-3:/srv/gluster/data/small/brick2
>>> Brick8: bd-reactive-worker-4:/srv/gluster/data/small/brick2
>>> Brick9: bd-reactive-worker-1:/srv/gluster/data/small/brick3
>>> Brick10: bd-reactive-worker-2:/srv/gluster/data/small/brick3
>>> Brick11: bd-reactive-worker-3:/srv/gluster/data/small/brick3
>>> Brick12: bd-reactive-worker-4:/srv/gluster/data/small/brick3
>>> Options Reconfigured:
>>> nfs.disable: on
>>> performance.readdir-ahead: on
>>> transport.address-family: inet
>>> cluster.data-self-heal: off
>>> cluster.entry-self-heal: off
>>> cluster.metadata-self-heal: off
>>> cluster.self-heal-daemon: off
>>>
>>> >sudo cat /var/log/glusterfs/cli.log
>>> [2016-12-26 13:41:11.422850] I [cli.c:730:main] 0-cli: Started running gluster with version 3.8.5
>>> [2016-12-26 13:41:11.428970] I [cli-cmd-volume.c:1828:cli_check_gsync_present] 0-: geo-replication not installed
>>> [2016-12-26 13:41:11.429308] I [MSGID: 101190] [event-epoll.c:628:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
>>> [2016-12-26 13:41:11.429360] I [socket.c:2403:socket_event_handler] 0-transport: disconnecting now
>>> [2016-12-26 13:41:11.430285] I [socket.c:3391:socket_submit_request] 0-glusterfs: not connected (priv->connected = 0)
>>> [2016-12-26 13:41:11.430320] W [rpc-clnt.c:1640:rpc_clnt_submit] 0-glusterfs: failed to submit rpc-request (XID: 0x1 Program: Gluster CLI, ProgVers: 2, Proc: 5) to rpc-transport (glusterfs)
>>> [2016-12-26 13:41:24.967491] I [cli.c:730:main] 0-cli: Started running gluster with version 3.8.5
>>> [2016-12-26 13:41:24.972755] I [cli-cmd-volume.c:1828:cli_check_gsync_present] 0-: geo-replication not installed
>>> [2016-12-26 13:41:24.973014] I [MSGID: 101190] [event-epoll.c:628:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
>>> [2016-12-26 13:41:24.973080] I [socket.c:2403:socket_event_handler] 0-transport: disconnecting now
>>> [2016-12-26 13:41:24.973552] I [cli-rpc-ops.c:817:gf_cli_get_volume_cbk] 0-cli: Received resp to get vol: 0
>>> [2016-12-26 13:41:24.976419] I [cli-rpc-ops.c:817:gf_cli_get_volume_cbk] 0-cli: Received resp to get vol: 0
>>> [2016-12-26 13:41:24.976957] I [cli-rpc-ops.c:817:gf_cli_get_volume_cbk] 0-cli: Received resp to get vol: 0
>>> [2016-12-26 13:41:24.976985] I [input.c:31:cli_batch] 0-: Exiting with: 0
>>>
>>> >sudo cat /var/log/glusterfs/srv-data-small.log
>>> [2016-12-26 13:46:53.407541] W [socket.c:590:__socket_rwv] 0-glusterfs: readv on 172.52.0.4:24007 failed (No data available)
>>> [2016-12-26
>>> 13:46:53.407997] E [glusterfsd-mgmt.c:1902:mgmt_rpc_notify]
>>> 0-glusterfsd-mgmt: failed to connect with remote-host: 172.52.0.4 (No data
>>> available) [2016-12-26 13:46:53.408079] I
>>> [glusterfsd-mgmt.c:1919:mgmt_rpc_notify] 0-glusterfsd-mgmt: Exhausted all
>>> volfile servers [2016-12-26 13:46:54.736497] I
>>> [rpc-clnt.c:1947:rpc_clnt_reconfig] 0-reactive_small-client-3: changing
>>> port to 49155 (from 0) [2016-12-26 13:46:54.738710] I
>>> [rpc-clnt.c:1947:rpc_clnt_reconfig] 0-reactive_small-client-7: changing
>>> port to 49156 (from 0) [2016-12-26 13:46:54.738766] I
>>> [rpc-clnt.c:1947:rpc_clnt_reconfig] 0-reactive_small-client-11: changing
>>> port to 49157 (from 0) [2016-12-26 13:46:54.742911] I [MSGID: 114057]
>>> [client-handshake.c:1446:select_server_supported_programs]
>>> 0-reactive_small-client-3: Using Program GlusterFS 3.3, Num (1298437),
>>> Version (330) [2016-12-26 13:46:54.743199] I [MSGID: 114057]
>>> [client-handshake.c:1446:select_server_supported_programs]
>>> 0-reactive_small-client-7: Using Program GlusterFS 3.3, Num (1298437),
>>> Version (330) [2016-12-26 13:46:54.743476] I [MSGID: 114046]
>>> [client-handshake.c:1222:client_setvolume_cbk] 0-reactive_small-client-3:
>>> Connected to reactive_small-client-3, attached to remote volume
>>> '/srv/gluster/data/small/brick1'. [2016-12-26 13:46:54.743488] I [MSGID:
>>> 114047] [client-handshake.c:1233:client_setvolume_cbk]
>>> 0-reactive_small-client-3: Server and Client lk-version numbers are not
>>> same, reopening the fds [2016-12-26 13:46:54.743603] I [MSGID: 114046]
>>> [client-handshake.c:1222:client_setvolume_cbk] 0-reactive_small-client-7:
>>> Connected to reactive_small-client-7, attached to remote volume
>>> '/srv/gluster/data/small/brick2'. [2016-12-26 13:46:54.743614] I [MSGID:
>>> 114047] [client-handshake.c:1233:client_setvolume_cbk]
>>> 0-reactive_small-client-7: Server and Client lk-version numbers are not
>>> same, reopening the fds [2016-12-26 13:46:54.743673] I [MSGID: 108002]
>>> [afr-common.c:4371:afr_notify] 0-reactive_small-replicate-2: Client-quorum
>>> is met [2016-12-26 13:46:54.743694] I [MSGID: 114035]
>>> [client-handshake.c:201:client_set_lk_version_cbk]
>>> 0-reactive_small-client-3: Server lk version = 1 [2016-12-26
>>> 13:46:54.743798] I [MSGID: 114035]
>>> [client-handshake.c:201:client_set_lk_version_cbk]
>>> 0-reactive_small-client-7: Server lk version = 1 [2016-12-26
>>> 13:46:54.745749] I [MSGID: 114057]
>>> [client-handshake.c:1446:select_server_supported_programs]
>>> 0-reactive_small-client-11: Using Program GlusterFS 3.3, Num (1298437),
>>> Version (330) [2016-12-26 13:46:54.746211] I [MSGID: 114046]
>>> [client-handshake.c:1222:client_setvolume_cbk] 0-reactive_small-client-11:
>>> Connected to reactive_small-client-11, attached to remote volume
>>> '/srv/gluster/data/small/brick3'. [2016-12-26 13:46:54.746226] I [MSGID:
>>> 114047] [client-handshake.c:1233:client_setvolume_cbk]
>>> 0-reactive_small-client-11: Server and Client lk-version numbers are not
>>> same, reopening the fds [2016-12-26 13:46:54.746288] I [MSGID: 108002]
>>> [afr-common.c:4371:afr_notify] 0-reactive_small-replicate-3: Client-quorum
>>> is met [2016-12-26 13:46:54.746403] I [MSGID: 114035]
>>> [client-handshake.c:201:client_set_lk_version_cbk]
>>> 0-reactive_small-client-11: Server lk version = 1 [2016-12-26
>>> 13:46:54.765923] E [MSGID: 114058]
>>> [client-handshake.c:1533:client_query_portmap_cbk]
>>> 0-reactive_small-client-2: failed to get the port number for remote
>>> subvolume. Please run 'gluster volume status' on server to see if brick
>>> process is running. [2016-12-26 13:46:54.765951] E [MSGID: 114058]
>>> [client-handshake.c:1533:client_query_portmap_cbk]
>>> 0-reactive_small-client-10: failed to get the port number for remote
>>> subvolume. Please run 'gluster volume status' on server to see if brick
>>> process is running. [2016-12-26 13:46:54.765986] E [MSGID: 114058]
>>> [client-handshake.c:1533:client_query_portmap_cbk]
>>> 0-reactive_small-client-6: failed to get the port number for remote
>>> subvolume. Please run 'gluster volume status' on server to see if brick
>>> process is running. [2016-12-26 13:46:54.766001] I [MSGID: 114018]
>>> [client.c:2280:client_rpc_notify] 0-reactive_small-client-2: disconnected
>>> from reactive_small-client-2. Client process will keep trying to connect to
>>> glusterd until brick's port is available [2016-12-26 13:46:54.766013] I
>>> [MSGID: 114018] [client.c:2280:client_rpc_notify]
>>> 0-reactive_small-client-10: disconnected from reactive_small-client-10.
>>> Client process will keep trying to connect to glusterd until brick's port
>>> is available [2016-12-26 13:46:54.766032] I [MSGID: 114018]
>>> [client.c:2280:client_rpc_notify] 0-reactive_small-client-6: disconnected
>>> from reactive_small-client-6. Client process will keep trying to connect to
>>> glusterd until brick's port is available [2016-12-26 13:46:57.019722] I
>>> [rpc-clnt.c:1947:rpc_clnt_reconfig] 0-reactive_small-client-2: changing
>>> port to 49155 (from 0) [2016-12-26 13:46:57.021611] I
>>> [rpc-clnt.c:1947:rpc_clnt_reconfig] 0-reactive_small-client-6: changing
>>> port to 49156 (from 0) [2016-12-26 13:46:57.025630] I [MSGID: 114057]
>>> [client-handshake.c:1446:select_server_supported_programs]
>>> 0-reactive_small-client-2: Using Program GlusterFS 3.3, Num (1298437),
>>> Version (330) [2016-12-26 13:46:57.026240] I [MSGID: 114046]
>>> [client-handshake.c:1222:client_setvolume_cbk] 0-reactive_small-client-2:
>>> Connected to reactive_small-client-2, attached to remote volume
>>> '/srv/gluster/data/small/brick1'. [2016-12-26 13:46:57.026252] I [MSGID:
>>> 114047] [client-handshake.c:1233:client_setvolume_cbk]
>>> 0-reactive_small-client-2: Server and Client lk-version numbers are not
>>> same, reopening the fds [2016-12-26 13:46:57.026312] I
>>> [rpc-clnt.c:1947:rpc_clnt_reconfig] 0-reactive_small-client-10: changing
>>> port to 49157 (from 0) [2016-12-26 13:46:57.027737] I [MSGID: 114035]
>>> [client-handshake.c:201:client_set_lk_version_cbk]
>>> 0-reactive_small-client-2: Server lk version = 1 [2016-12-26
>>> 13:46:57.029251] I [MSGID: 114057]
>>> [client-handshake.c:1446:select_server_supported_programs]
>>> 0-reactive_small-client-6: Using Program GlusterFS 3.3, Num (1298437),
>>> Version (330) [2016-12-26 13:46:57.029781] I [MSGID: 114046]
>>> [client-handshake.c:1222:client_setvolume_cbk] 0-reactive_small-client-6:
>>> Connected to reactive_small-client-6, attached to remote volume
>>> '/srv/gluster/data/small/brick2'. [2016-12-26 13:46:57.029798] I [MSGID:
>>> 114047] [client-handshake.c:1233:client_setvolume_cbk]
>>> 0-reactive_small-client-6: Server and Client lk-version numbers are not
>>> same, reopening the fds [2016-12-26 13:46:57.030194] I [MSGID: 114035]
>>> [client-handshake.c:201:client_set_lk_version_cbk]
>>> 0-reactive_small-client-6: Server lk version = 1 [2016-12-26
>>> 13:46:57.031709] I [MSGID: 114057]
>>> [client-handshake.c:1446:select_server_supported_programs]
>>> 0-reactive_small-client-10: Using Program GlusterFS 3.3, Num (1298437),
>>> Version (330) [2016-12-26 13:46:57.032215] I [MSGID: 114046]
>>> [client-handshake.c:1222:client_setvolume_cbk] 0-reactive_small-client-10:
>>> Connected to reactive_small-client-10, attached to remote volume
>>> '/srv/gluster/data/small/brick3'. [2016-12-26 13:46:57.032224] I [MSGID:
>>> 114047] [client-handshake.c:1233:client_setvolume_cbk]
>>> 0-reactive_small-client-10: Server and Client lk-version numbers are not
>>> same, reopening the fds [2016-12-26 13:46:57.032475] I [MSGID: 114035]
>>> [client-handshake.c:201:client_set_lk_version_cbk]
>>> 0-reactive_small-client-10: Server lk version = 1 [2016-12-26
>>> 13:47:04.032294] I [glusterfsd-mgmt.c:1596:mgmt_getspec_cbk] 0-glusterfs:
>>> No change in volfile, continuing [2016-12-26 13:59:01.935684] I [MSGID:
>>> 108031] [afr-common.c:2067:afr_local_discovery_cbk]
>>> 0-reactive_small-replicate-0: selecting local read_child
>>> reactive_small-client-1 [2016-12-26 13:59:01.937790] I [MSGID: 108031]
>>> [afr-common.c:2067:afr_local_discovery_cbk] 0-reactive_small-replicate-1:
>>> selecting local read_child reactive_small-client-5 [2016-12-26
>>> 13:59:01.938727] I [MSGID: 108031]
>>> [afr-common.c:2067:afr_local_discovery_cbk] 0-reactive_small-replicate-3:
>>> selecting local read_child reactive_small-client-9 [2016-12-26
>>> 13:59:09.566572] I [dict.c:462:dict_get]
>>> (-->/usr/lib64/glusterfs/3.8.5/xlator/debug/io-stats.so(+0x13628)
>>> [0x7fada9d4c628]
>>> -->/usr/lib64/glusterfs/3.8.5/xlator/system/posix-acl.so(+0x9d0b)
>>> [0x7fada9b30d0b] -->/lib64/libglusterfs.so.0(dict_get+0xec)
>>> [0x7fadb913933c] ) 0-dict: !this || key=system.posix_acl_access [Invalid
>>> argument] [2016-12-26 13:59:09.566730] I [dict.c:462:dict_get]
>>> (-->/usr/lib64/glusterfs/3.8.5/xlator/debug/io-stats.so(+0x13628)
>>> [0x7fada9d4c628]
>>> -->/usr/lib64/glusterfs/3.8.5/xlator/system/posix-acl.so(+0x9d61)
>>> [0x7fada9b30d61] -->/lib64/libglusterfs.so.0(dict_get+0xec)
>>> [0x7fadb913933c] ) 0-dict: !this || key=system.posix_acl_default [Invalid
>>> argument]
>>>
>>> >sudo cat /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
>>> [2016-12-26 13:46:37.511891] I [MSGID: 106487]
>>> [glusterd-handler.c:1474:__glusterd_handle_cli_list_friends] 0-glusterd:
>>> Received cli list req [2016-12-26 13:46:53.407000] W
>>> [socket.c:590:__socket_rwv] 0-management: readv on 172.52.0.4:24007
>>> failed (No data available) [2016-12-26
>>> 13:46:53.407171] I [MSGID: 106004]
>>> [glusterd-handler.c:5219:__glusterd_peer_rpc_notify] 0-management: Peer
>>> <bd-reactive-worker-1> (<59500674-750f-4e16-aeea-4a99fd67218a>), in state
>>> <Peer in Cluster>, has disconnected from glusterd. [2016-12-26
>>> 13:46:53.407532] W [glusterd-locks.c:675:glusterd_mgmt_v3_unlock]
>>> (-->/usr/lib64/glusterfs/3.8.5/xlator/mgmt/glusterd.so(+0x1de5c)
>>> [0x7f467cda0e5c]
>>> -->/usr/lib64/glusterfs/3.8.5/xlator/mgmt/glusterd.so(+0x27a08)
>>> [0x7f467cdaaa08]
>>> -->/usr/lib64/glusterfs/3.8.5/xlator/mgmt/glusterd.so(+0xd07fa)
>>> [0x7f467ce537fa] ) 0-management: Lock for vol reactive_large not held
>>> [2016-12-26 13:46:53.407575] W [MSGID: 106118]
>>> [glusterd-handler.c:5241:__glusterd_peer_rpc_notify] 0-management: Lock not
>>> released for reactive_large [2016-12-26 13:46:53.407694] W
>>> [glusterd-locks.c:675:glusterd_mgmt_v3_unlock]
>>> (-->/usr/lib64/glusterfs/3.8.5/xlator/mgmt/glusterd.so(+0x1de5c)
>>> [0x7f467cda0e5c]
>>> -->/usr/lib64/glusterfs/3.8.5/xlator/mgmt/glusterd.so(+0x27a08)
>>> [0x7f467cdaaa08]
>>> -->/usr/lib64/glusterfs/3.8.5/xlator/mgmt/glusterd.so(+0xd07fa)
>>> [0x7f467ce537fa] ) 0-management: Lock for vol reactive_small not held
>>> [2016-12-26 13:46:53.407723] W [MSGID: 106118]
>>> [glusterd-handler.c:5241:__glusterd_peer_rpc_notify] 0-management: Lock not
>>> released for reactive_small [2016-12-26 13:46:53.485185] I [MSGID: 106163]
>>> [glusterd-handshake.c:1271:__glusterd_mgmt_hndsk_versions_ack]
>>> 0-management: using the op-version 30800 [2016-12-26 13:46:53.489760] I
>>> [MSGID: 106490]
>>> [glusterd-handler.c:2608:__glusterd_handle_incoming_friend_req] 0-glusterd:
>>> Received probe from uuid: 59500674-750f-4e16-aeea-4a99fd67218a [2016-12-26
>>> 13:46:53.529568] W [glusterfsd.c:1327:cleanup_and_exit]
>>> (-->/lib64/libpthread.so.0(+0x7dc5) [0x7f4687483dc5]
>>> -->/usr/sbin/glusterd(glusterfs_sigwaiter+0xe5) [0x7f4688b17cd5]
>>> -->/usr/sbin/glusterd(cleanup_and_exit+0x6b) [0x7f4688b17b4b] ) 0-:
>>> received signum (15), shutting down [2016-12-26 13:46:53.562392] I [MSGID:
>>> 100030] [glusterfsd.c:2454:main] 0-/usr/sbin/glusterd: Started running
>>> /usr/sbin/glusterd version 3.8.5 (args: /usr/sbin/glusterd -p
>>> /var/run/glusterd.pid --log-level INFO) [2016-12-26 13:46:53.569917] I
>>> [MSGID: 106478] [glusterd.c:1379:init] 0-management: Maximum allowed open
>>> file descriptors set to 65536 [2016-12-26 13:46:53.569959] I [MSGID:
>>> 106479] [glusterd.c:1428:init] 0-management: Using /var/lib/glusterd as
>>> working directory [2016-12-26 13:46:53.575301] E
>>> [rpc-transport.c:287:rpc_transport_load] 0-rpc-transport:
>>> /usr/lib64/glusterfs/3.8.5/rpc-transport/rdma.so: cannot open shared object
>>> file: No such file or directory [2016-12-26 13:46:53.575327] W
>>> [rpc-transport.c:291:rpc_transport_load] 0-rpc-transport: volume
>>> 'rdma.management': transport-type 'rdma' is not valid or not found on this
>>> machine [2016-12-26 13:46:53.575335] W
>>> [rpcsvc.c:1638:rpcsvc_create_listener] 0-rpc-service: cannot create
>>> listener, initing the transport failed [2016-12-26 13:46:53.575341] E
>>> [MSGID: 106243] [glusterd.c:1652:init] 0-management: creation of 1
>>> listeners failed, continuing with succeeded transport [2016-12-26
>>> 13:46:53.576843] I [MSGID: 106228]
>>> [glusterd.c:429:glusterd_check_gsync_present] 0-glusterd: geo-replication
>>> module not installed in the system [No such file or directory] [2016-12-26
>>> 13:46:53.577209] I [MSGID: 106513]
>>> [glusterd-store.c:2098:glusterd_restore_op_version] 0-glusterd: retrieved
>>> op-version: 30800 [2016-12-26 13:46:53.720253] I [MSGID: 106498]
>>> [glusterd-handler.c:3649:glusterd_friend_add_from_peerinfo] 0-management:
>>> connect returned 0 [2016-12-26 13:46:53.720477] I
>>> [rpc-clnt.c:1033:rpc_clnt_connection_init] 0-management: setting
>>> frame-timeout to 600 [2016-12-26 13:46:53.723273] I
>>> [rpc-clnt.c:1033:rpc_clnt_connection_init] 0-management: setting
>>> frame-timeout to 600 [2016-12-26 13:46:53.725591] I
>>> [rpc-clnt.c:1033:rpc_clnt_connection_init] 0-management: setting
>>> frame-timeout to 600 The message "I [MSGID: 106498]
>>> [glusterd-handler.c:3649:glusterd_friend_add_from_peerinfo] 0-management:
>>> connect returned 0" repeated 2 times between [2016-12-26 13:46:53.720253]
>>> and [2016-12-26 13:46:53.720391] [2016-12-26 13:46:53.728948] I [MSGID:
>>> 106544] [glusterd.c:155:glusterd_uuid_init] 0-management: retrieved UUID:
>>> 2767e4e8-e203-4f77-8087-298c5a0f862f
>>> Final graph:
>>> +------------------------------------------------------------------------------+
>>>   1: volume management
>>>   2:     type mgmt/glusterd
>>>   3:     option rpc-auth.auth-glusterfs on
>>>   4:     option rpc-auth.auth-unix on
>>>   5:     option rpc-auth.auth-null on
>>>   6:     option rpc-auth-allow-insecure on
>>>   7:     option transport.socket.listen-backlog 128
>>>   8:     option event-threads 1
>>>   9:     option ping-timeout 0
>>>  10:     option transport.socket.read-fail-log off
>>>  11:     option transport.socket.keepalive-interval 2
>>>  12:     option transport.socket.keepalive-time 10
>>>  13:     option transport-type rdma
>>>  14:     option working-directory /var/lib/glusterd
>>>  15: end-volume
>>>  16:
>>> +------------------------------------------------------------------------------+
>>> [2016-12-26 13:46:53.732358] I [MSGID: 101190]
>>> [event-epoll.c:628:event_dispatch_epoll_worker] 0-epoll: Started thread
>>> with index 1 [2016-12-26 13:46:53.739916] I [MSGID: 106163]
>>> [glusterd-handshake.c:1271:__glusterd_mgmt_hndsk_versions_ack]
>>> 0-management: using the op-version 30800 [2016-12-26 13:46:54.735745] I
>>> [MSGID: 106163]
>>> [glusterd-handshake.c:1271:__glusterd_mgmt_hndsk_versions_ack]
>>> 0-management: using the op-version 30800 [2016-12-26 13:46:54.743668] I
>>> [MSGID: 106490]
>>> [glusterd-handler.c:2608:__glusterd_handle_incoming_friend_req] 0-glusterd:
>>> Received probe from uuid: 854a4235-dff0-4ae8-8589-72aa6ce6a35f [2016-12-26
>>> 13:46:54.745380] I [MSGID: 106493]
>>> [glusterd-handler.c:3852:glusterd_xfer_friend_add_resp] 0-glusterd:
>>> Responded to bd-reactive-worker-4 (0), ret: 0, op_ret: 0 [2016-12-26
>>> 13:46:54.752307] I [rpc-clnt.c:1033:rpc_clnt_connection_init] 0-nfs:
>>> setting frame-timeout to 600 [2016-12-26 13:46:54.752443] I [MSGID: 106132]
>>> [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: nfs already
>>> stopped [2016-12-26 13:46:54.752472] I [MSGID: 106568]
>>> [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: nfs service is
>>> stopped [2016-12-26 13:46:54.752849] I
>>> [rpc-clnt.c:1033:rpc_clnt_connection_init] 0-glustershd: setting
>>> frame-timeout to 600 [2016-12-26 13:46:54.753881] I [MSGID: 106568]
>>> [glusterd-proc-mgmt.c:87:glusterd_proc_stop] 0-management: Stopping
>>> glustershd daemon running in pid: 17578 [2016-12-26 13:46:55.754166] I
>>> [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management:
>>> glustershd service is stopped [2016-12-26 13:46:55.754226] I [MSGID:
>>> 106567] [glusterd-svc-mgmt.c:196:glusterd_svc_start] 0-management: Starting
>>> glustershd service [2016-12-26 13:46:55.765127] W
>>> [socket.c:3065:socket_connect] 0-glustershd: Ignore failed connection
>>> attempt on , (No such file or directory) [2016-12-26 13:46:55.765272] I
>>> [rpc-clnt.c:1033:rpc_clnt_connection_init] 0-quotad: setting frame-timeout
>>> to 600 [2016-12-26 13:46:55.765511] I [MSGID: 106132]
>>> [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: quotad already
>>> stopped [2016-12-26 13:46:55.765583] I [MSGID: 106568]
>>> [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: quotad service is
>>> stopped [2016-12-26 13:46:55.765680] I
>>> [rpc-clnt.c:1033:rpc_clnt_connection_init] 0-bitd: setting frame-timeout to
>>> 600 [2016-12-26 13:46:55.765876] I [MSGID: 106132]
>>> [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: bitd already
>>> stopped [2016-12-26 13:46:55.765922] I [MSGID: 106568]
>>> [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: bitd service is
>>> stopped [2016-12-26 13:46:55.766041] I
>>> [rpc-clnt.c:1033:rpc_clnt_connection_init] 0-scrub: setting frame-timeout
>>> to 600 [2016-12-26 13:46:55.766312] I [MSGID: 106132]
>>> [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: scrub already
>>> stopped [2016-12-26 13:46:55.766383] I [MSGID: 106568]
>>> [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: scrub service is
>>> stopped [2016-12-26 13:46:55.766613] I
>>> [rpc-clnt.c:1033:rpc_clnt_connection_init] 0-management: setting
>>> frame-timeout to 600 [2016-12-26 13:46:55.766878] I
>>> [rpc-clnt.c:1033:rpc_clnt_connection_init] 0-management: setting
>>> frame-timeout to 600 [2016-12-26 13:46:55.767109] I
>>> [rpc-clnt.c:1033:rpc_clnt_connection_init] 0-management: setting
>>> frame-timeout to 600 [2016-12-26 13:46:55.767252] I
>>> [rpc-clnt.c:1033:rpc_clnt_connection_init] 0-management: setting
>>> frame-timeout to 600 [2016-12-26 13:46:55.767420] I
>>> [rpc-clnt.c:1033:rpc_clnt_connection_init] 0-management: setting
>>> frame-timeout to 600 [2016-12-26 13:46:55.767670] I
>>> [rpc-clnt.c:1033:rpc_clnt_connection_init] 0-management: setting
>>> frame-timeout to 600 [2016-12-26 13:46:55.767800] I
>>> [rpc-clnt.c:1033:rpc_clnt_connection_init] 0-snapd: setting frame-timeout
>>> to 600 [2016-12-26 13:46:55.767916] I
>>> [rpc-clnt.c:1033:rpc_clnt_connection_init] 0-snapd: setting frame-timeout
>>> to 600 [2016-12-26 13:46:55.768115] I [MSGID: 106492]
>>> [glusterd-handler.c:2784:__glusterd_handle_friend_update] 0-glusterd:
>>> Received friend update from uuid: 854a4235-dff0-4ae8-8589-72aa6ce6a35f
>>> [2016-12-26 13:46:55.769849] I [MSGID: 106502]
>>> [glusterd-handler.c:2829:__glusterd_handle_friend_update] 0-management:
>>> Received my uuid as Friend [2016-12-26 13:46:55.771341] I [MSGID: 106490]
>>> [glusterd-handler.c:2608:__glusterd_handle_incoming_friend_req] 0-glusterd:
>>> Received probe from uuid: 9885f122-6242-4ad8-96ee-3a8e25c2d98e [2016-12-26
>>> 13:46:55.772677] I [MSGID: 106493]
>>> [glusterd-handler.c:3852:glusterd_xfer_friend_add_resp] 0-glusterd:
>>> Responded to bd-reactive-worker-3 (0), ret: 0, op_ret: 0 [2016-12-26
>>> 13:46:55.775913] I [MSGID: 106132]
>>> [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: nfs already
>>> stopped [2016-12-26 13:46:55.775946] I [MSGID: 106568]
>>> [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: nfs service is
>>> stopped [2016-12-26 13:46:55.777210] I [MSGID: 106568]
>>> [glusterd-proc-mgmt.c:87:glusterd_proc_stop] 0-management: Stopping
>>> glustershd daemon running in pid: 17762 [2016-12-26 13:46:56.778124] I
>>> [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management:
>>> glustershd service is stopped [2016-12-26 13:46:56.778194] I [MSGID:
>>> 106567] [glusterd-svc-mgmt.c:196:glusterd_svc_start] 0-management: Starting
>>> glustershd service [2016-12-26 13:46:56.781946] I [MSGID: 106132]
>>> [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: quotad already
>>> stopped [2016-12-26 13:46:56.781976] I [MSGID: 106568]
>>> [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: quotad service is
>>> stopped [2016-12-26 13:46:56.782024] I [MSGID: 106132]
>>> [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: bitd already
>>> stopped [2016-12-26 13:46:56.782046] I [MSGID: 106568]
>>> [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: bitd service is
>>> stopped [2016-12-26 13:46:56.782075] I [MSGID: 106132]
>>> [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: scrub already
>>> stopped [2016-12-26 13:46:56.782085] I [MSGID: 106568]
>>> [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: scrub service is
>>> stopped [2016-12-26 13:46:56.785199] I [MSGID: 106493]
>>> [glusterd-rpc-ops.c:476:__glusterd_friend_add_cbk] 0-glusterd: Received ACC
>>> from uuid: 854a4235-dff0-4ae8-8589-72aa6ce6a35f, host:
>>> bd-reactive-worker-4, port: 0 [2016-12-26 13:46:56.789916] I [MSGID:
>>> 106492] [glusterd-handler.c:2784:__glusterd_handle_friend_update]
>>> 0-glusterd: Received friend update from uuid:
>>> 854a4235-dff0-4ae8-8589-72aa6ce6a35f [2016-12-26 13:46:56.791664] I [MSGID:
>>> 106502] [glusterd-handler.c:2829:__glusterd_handle_friend_update]
>>> 0-management: Received my uuid as Friend [2016-12-26 13:46:56.795667] I
>>> [MSGID: 106492] [glusterd-handler.c:2784:__glusterd_handle_friend_update]
>>> 0-glusterd: Received friend update from uuid:
>>> 9885f122-6242-4ad8-96ee-3a8e25c2d98e [2016-12-26 13:46:56.801246] I [MSGID:
>>> 106502] [glusterd-handler.c:2829:__glusterd_handle_friend_update]
>>> 0-management: Received my uuid as Friend [2016-12-26 13:46:56.801309] I
>>> [MSGID: 106493] [glusterd-rpc-ops.c:691:__glusterd_friend_update_cbk]
>>> 0-management: Received ACC from uuid: 854a4235-dff0-4ae8-8589-72aa6ce6a35f
>>> [2016-12-26 13:46:56.801334] I [MSGID: 106493]
>>> [glusterd-rpc-ops.c:476:__glusterd_friend_add_cbk] 0-glusterd: Received ACC
>>> from uuid: 9885f122-6242-4ad8-96ee-3a8e25c2d98e, host:
>>> bd-reactive-worker-3, port: 0 [2016-12-26 13:46:56.802748] I [MSGID:
>>> 106492] [glusterd-handler.c:2784:__glusterd_handle_friend_update]
>>> 0-glusterd: Received friend update from uuid:
>>> 9885f122-6242-4ad8-96ee-3a8e25c2d98e [2016-12-26 13:46:56.806969] I [MSGID:
>>> 106502] [glusterd-handler.c:2829:__glusterd_handle_friend_update]
>>> 0-management: Received my uuid as Friend [2016-12-26 13:46:56.808523] I
>>> [MSGID: 106493] [glusterd-rpc-ops.c:691:__glusterd_friend_update_cbk]
>>> 0-management: Received ACC from uuid: 9885f122-6242-4ad8-96ee-3a8e25c2d98e
>>> [2016-12-26 13:46:57.439163] I [MSGID: 106493]
>>> [glusterd-rpc-ops.c:476:__glusterd_friend_add_cbk] 0-glusterd: Received ACC
>>> from uuid: 59500674-750f-4e16-aeea-4a99fd67218a, host:
>>> bd-reactive-worker-1, port: 0 [2016-12-26 13:46:57.443271] I [MSGID:
>>> 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: nfs
>>> already stopped [2016-12-26 13:46:57.443317] I [MSGID: 106568]
>>> [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: nfs service is
>>> stopped [2016-12-26 13:46:57.444603] I [MSGID: 106568]
>>> [glusterd-proc-mgmt.c:87:glusterd_proc_stop] 0-management: Stopping
>>> glustershd daemon running in pid: 17790 [2016-12-26 13:46:58.444802] I
>>> [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management:
>>> glustershd service is stopped [2016-12-26 13:46:58.444867] I [MSGID:
>>> 106567] [glusterd-svc-mgmt.c:196:glusterd_svc_start] 0-management: Starting
>>> glustershd service [2016-12-26 13:46:58.448158] W
>>> [socket.c:3065:socket_connect] 0-glustershd: Ignore failed connection
>>> attempt on , (No such file or directory) [2016-12-26 13:46:58.448293] I
>>> [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management:
>>> quotad already stopped [2016-12-26 13:46:58.448322] I [MSGID: 106568]
>>> [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: quotad service is
>>> stopped [2016-12-26 13:46:58.448378] I [MSGID: 106132]
>>> [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: bitd already
>>> stopped [2016-12-26 13:46:58.448396] I [MSGID: 106568]
>>> [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: bitd service is
>>> stopped [2016-12-26 13:46:58.448447] I [MSGID: 106132]
>>> [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: scrub already
>>> stopped [2016-12-26 13:46:58.448464] I [MSGID: 106568]
>>> [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: scrub service is
>>> stopped [2016-12-26 13:46:58.448523] I [MSGID: 106487]
>>> [glusterd-handler.c:1474:__glusterd_handle_cli_list_friends] 0-glusterd:
>>> Received cli list req [2016-12-26 13:46:58.482252] I [MSGID: 106493]
>>> [glusterd-rpc-ops.c:691:__glusterd_friend_update_cbk] 0-management:
>>> Received ACC from uuid: 59500674-750f-4e16-aeea-4a99fd67218a [2016-12-26
>>> 13:46:58.484951] I [MSGID: 106163]
>>> [glusterd-handshake.c:1271:__glusterd_mgmt_hndsk_versions_ack]
>>> 0-management: using the op-version 30800 [2016-12-26 13:46:58.492305] I
>>> [MSGID: 106490]
>>> [glusterd-handler.c:2608:__glusterd_handle_incoming_friend_req] 0-glusterd:
>>> Received probe from uuid: 59500674-750f-4e16-aeea-4a99fd67218a [2016-12-26
>>> 13:46:58.493713] I [MSGID: 106493]
>>> [glusterd-handler.c:3852:glusterd_xfer_friend_add_resp] 0-glusterd:
>>> Responded to bd-reactive-worker-1 (0), ret: 0, op_ret: 0 [2016-12-26
>>> 13:46:58.501512] I [MSGID: 106492]
>>> [glusterd-handler.c:2784:__glusterd_handle_friend_update] 0-glusterd:
>>> Received friend update from uuid: 59500674-750f-4e16-aeea-4a99fd67218a
>>> [2016-12-26 13:46:58.503348] I [MSGID: 106502]
>>> [glusterd-handler.c:2829:__glusterd_handle_friend_update] 0-management:
>>> Received my uuid as Friend [2016-12-26 13:46:58.509794] I [MSGID: 106493]
>>> [glusterd-rpc-ops.c:691:__glusterd_friend_update_cbk] 0-management:
>>> Received ACC from uuid: 59500674-750f-4e16-aeea-4a99fd67218a [2016-12-26
>>> 13:47:04.057563] I [MSGID: 106143] [glusterd-pmap.c:227:pmap_registry_bind]
>>> 0-pmap: adding brick /srv/gluster/data/small/brick3 on port 49157
>>> [2016-12-26 13:47:04.058477] I [MSGID: 106143]
>>> [glusterd-pmap.c:227:pmap_registry_bind] 0-pmap: adding brick
>>> /srv/gluster/data/large/brick1 on port 49152 [2016-12-26 13:47:04.059496] I
>>> [MSGID: 106143] [glusterd-pmap.c:227:pmap_registry_bind] 0-pmap: adding
>>> brick /srv/gluster/data/small/brick1 on port 49155 [2016-12-26
>>> 13:47:04.059546] I [MSGID: 106143] [glusterd-pmap.c:227:pmap_registry_bind]
>>> 0-pmap: adding brick /srv/gluster/data/large/brick3 on port 49154
>>> [2016-12-26 13:47:04.072431] I [MSGID: 106143]
>>> [glusterd-pmap.c:227:pmap_registry_bind] 0-pmap: adding brick
>>> /srv/gluster/data/small/brick2 on port 49156 [2016-12-26 13:47:04.262372] I
>>> [MSGID: 106143] [glusterd-pmap.c:227:pmap_registry_bind] 0-pmap: adding
>>> brick /srv/gluster/data/large/brick2 on port 49153 [2016-12-26
>>> 13:47:59.970037] I [MSGID: 106499]
>>> [glusterd-handler.c:4349:__glusterd_handle_status_volume] 0-management:
>>> Received status volume req for volume reactive_large [2016-12-26
>>> 13:47:59.978405] I [MSGID: 106499]
>>> [glusterd-handler.c:4349:__glusterd_handle_status_volume] 0-management:
>>> Received status volume req for volume reactive_small
>>>
>>
>>
>>
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users at gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-users
>>
>>
>>
>