[Gluster-users] Using the host name of the volume, its related commands can become very slow

陈曦 chenxi at shudun.com
Tue Jan 16 13:07:56 UTC 2018


Thanks for your quick response


I have executed the commands again and reproduced the problem.
The following commands were executed on all nodes (3 nodes):


[root@f08n25 glusterfs]# rm -rf /var/log/glusterfs/*


[root@f08n25 glusterfs]# rm -rf /var/lib/glusterd/*


[root@f08n25 glusterfs]# service glusterd restart
Restarting glusterd (via systemctl):                       [  OK  ]



The following commands were executed on one node:


[root@f08n25 /]# ping f08n33 -c 10
PING f08n33 (10.33.0.31) 56(84) bytes of data.
64 bytes from f08n33 (10.33.0.31): icmp_seq=1 ttl=64 time=0.064 ms
64 bytes from f08n33 (10.33.0.31): icmp_seq=2 ttl=64 time=0.049 ms
64 bytes from f08n33 (10.33.0.31): icmp_seq=3 ttl=64 time=0.052 ms
64 bytes from f08n33 (10.33.0.31): icmp_seq=4 ttl=64 time=0.051 ms
64 bytes from f08n33 (10.33.0.31): icmp_seq=5 ttl=64 time=0.062 ms
64 bytes from f08n33 (10.33.0.31): icmp_seq=6 ttl=64 time=0.058 ms
64 bytes from f08n33 (10.33.0.31): icmp_seq=7 ttl=64 time=0.071 ms
64 bytes from f08n33 (10.33.0.31): icmp_seq=8 ttl=64 time=0.089 ms
64 bytes from f08n33 (10.33.0.31): icmp_seq=9 ttl=64 time=0.061 ms
64 bytes from f08n33 (10.33.0.31): icmp_seq=10 ttl=64 time=0.075 ms


--- f08n33 ping statistics ---
10 packets transmitted, 10 received, 0% packet loss, time 9000ms
rtt min/avg/max/mdev = 0.049/0.063/0.089/0.012 ms


[root@f08n25 /]# ping f08n29 -c 10
PING f08n29 (10.33.0.30) 56(84) bytes of data.
64 bytes from f08n29 (10.33.0.30): icmp_seq=1 ttl=64 time=0.065 ms
64 bytes from f08n29 (10.33.0.30): icmp_seq=2 ttl=64 time=0.056 ms
64 bytes from f08n29 (10.33.0.30): icmp_seq=3 ttl=64 time=0.058 ms
64 bytes from f08n29 (10.33.0.30): icmp_seq=4 ttl=64 time=0.081 ms
64 bytes from f08n29 (10.33.0.30): icmp_seq=5 ttl=64 time=0.059 ms
64 bytes from f08n29 (10.33.0.30): icmp_seq=6 ttl=64 time=0.055 ms
64 bytes from f08n29 (10.33.0.30): icmp_seq=7 ttl=64 time=0.056 ms
64 bytes from f08n29 (10.33.0.30): icmp_seq=8 ttl=64 time=0.060 ms
64 bytes from f08n29 (10.33.0.30): icmp_seq=9 ttl=64 time=0.057 ms
64 bytes from f08n29 (10.33.0.30): icmp_seq=10 ttl=64 time=0.061 ms


--- f08n29 ping statistics ---
10 packets transmitted, 10 received, 0% packet loss, time 9016ms
rtt min/avg/max/mdev = 0.055/0.060/0.081/0.012 ms
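As a further check on name resolution itself (ping only exercises the forward lookup), here is a quick sketch using getent, which goes through the same NSS stack glusterd uses; the hostnames and addresses are the ones from the ping output above:

# Time forward lookups (hostname -> IP)
time getent hosts f08n33
time getent hosts f08n29

# Time reverse lookups (IP -> hostname); glusterd may perform these too,
# and a slow or missing reverse entry will not show up in ping
time getent hosts 10.33.0.31
time getent hosts 10.33.0.30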






[root@f08n25 /]# time gluster peer probe f08n33
peer probe: success. 


real	0m0.264s
user	0m0.190s
sys	0m0.020s
[root@f08n25 /]# time gluster peer probe f08n29
peer probe: success. 


real	0m0.265s
user	0m0.190s
sys	0m0.010s


[root@f08n25 /]# time gluster volume create test f08n33:/data/gluster/bricks{1..5} f08n29:/data/gluster/bricks{1..5} f08n25:/data/gluster/bricks{1..5} force
volume create: test: success: please start the volume to access data


real	0m38.601s
user	0m0.170s
sys	0m0.030s
[root@f08n25 /]# time gluster volume start test
volume start: test: success


real	0m8.055s
user	0m0.140s
sys	0m0.060s
[root@f08n25 /]# time gluster volume quota test enable
volume quota : success


real	0m17.867s
user	0m0.180s
sys	0m0.020s


[root@f08n25 /]# time gluster volume quota test limit-usage / 10GB
volume quota : success


real	0m0.880s
user	0m0.180s
sys	0m0.020s
[root@f08n25 /]# time gluster volume stop test
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: test: success


real	0m11.971s
user	0m0.180s
sys	0m0.020s
[root@f08n25 /]# time gluster volume start test
volume start: test: success


real	0m15.165s
user	0m0.190s
sys	0m0.010s
[root@f08n25 /]# time gluster volume set test nfs.disable ON
volume set: success


real	0m25.123s
user	0m0.190s
sys	0m0.010s
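Given that each of these commands barely uses any CPU (user/sys are tiny) but is slow in wall-clock time, the delay is spent waiting on the cluster. If slow name resolution is the suspect, one rough way to confirm it (assuming tcpdump is installed; the interface name eth0 is an assumption, adjust as needed) is to watch for DNS traffic while a slow command runs:

# Terminal 1: watch for DNS queries and retries leaving this node
tcpdump -n -i eth0 port 53

# Terminal 2: run one of the slow commands and see whether
# DNS queries (or their timeouts) line up with the delay
time gluster volume set test nfs.disable on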



[root@f08n25 glusterfs]# uname -a
Linux f08n25 4.2.0 #1 SMP Tue Jun 7 01:18:20 CST 2016 aarch64 aarch64 aarch64 GNU/Linux


[root@f08n25 glusterfs]# cat /etc/redhat-release
CentOS Linux release 7.3.1611 (AltArch) 


[root@f08n25 glusterfs]# glusterfs --version
glusterfs 3.7.20 built on Jan 30 2017 16:22:41
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2013 Red Hat, Inc. <http://www.redhat.com/>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser
General Public License, version 3 or any later version (LGPLv3
or later), or the GNU General Public License, version 2 (GPLv2),
in all cases as published by the Free Software Foundation.





The attachments are the logs from the 3 nodes.




I can see some errors in the logs, such as this one in quotad.log:
[2017-12-01 01:42:07.329344] E [MSGID: 114058] [client-handshake.c:1524:client_query_portmap_cbk] 0-test-client-3: failed to get the port number for remote subvolume. Please run 'gluster volume status' on server to see if brick process is running.
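As that message itself suggests, the port lookup for the brick failed; one can confirm whether the brick processes are running and have registered their ports with:

# check whether all brick processes are up and have port numbers
gluster volume status test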


and this one in etc-glusterfs-glusterd.vol.log:
[2017-12-01 01:31:29.196038] E [MSGID: 106243] [glusterd.c:1656:init] 0-management: creation of 1 listeners failed, continuing with succeeded transport
and so on.


Could these errors be what is making Gluster misbehave?




Thank you very much.
I look forward to your reply.
chenx

------------------ Original ------------------
From:  "Atin Mukherjee"<amukherj at redhat.com>;
Date:  Tue, Jan 16, 2018 11:23 AM
To:  "陈曦"<chenxi at shudun.com>; 
Cc:  "gluster-users"<gluster-users at gluster.org>; 
Subject:  Re: [Gluster-users] Using the host name of the volume, its related commands can become very slow

 


On Mon, Jan 15, 2018 at 6:30 PM, 陈曦 <chenxi at shudun.com> wrote:
When a volume is created using host names, its related gluster commands can become very slow, for example volume create, start, stop, and the NFS-related commands. In some cases a command even returns "Error : Request timed out".
But if the volume is created using IP addresses, all gluster commands behave normally.


I have configured /etc/hosts correctly: SSH can reach another node by hostname, and I can ping the other nodes by hostname as well.
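One caveat (an assumption worth ruling out, not something verified here): SSH and ping only prove that the forward lookup works, while glusterd may also do reverse lookups when peers connect, so it may be worth checking both the hosts entries and the lookup order:

# every node should list all 3 peers with consistent names
cat /etc/hosts

# "files" should come before "dns" so /etc/hosts is consulted first
grep '^hosts:' /etc/nsswitch.conf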


Here are some logs:
[root@f08n25 glusterfs]# tail -n 50 data-gluster-test.log
[2018-02-03 13:53:22.777184] I [MSGID: 100030] [glusterfsd.c:2338:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.7.20 (args: /usr/sbin/glusterfs --volfile-server=localhost --volfile-id=test /data/gluster/test)
[2018-02-03 13:53:22.810249] I [MSGID: 101190] [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
[2018-02-03 13:53:22.811289] E [glusterfsd-mgmt.c:1590:mgmt_getspec_cbk] 0-glusterfs: failed to get the 'volume file' from server
[2018-02-03 13:53:22.811323] E [glusterfsd-mgmt.c:1690:mgmt_getspec_cbk] 0-mgmt: failed to fetch volume file (key:test)
[2018-02-03 13:53:22.811847] W [glusterfsd.c:1251:cleanup_and_exit] (-->/lib64/libgfrpc.so.0(rpc_clnt_handle_reply+0xa0) [0xffffad0cff84] -->/usr/sbin/glusterfs(mgmt_getspec_cbk+0x3f8) [0xaaaad6e1f734] -->/usr/sbin/glusterfs(cleanup_and_exit+0x78) [0xaaaad6e199fc] ) 0-: received signum (0), shutting down
[2018-02-03 13:53:22.811892] I [fuse-bridge.c:5720:fini] 0-fuse: Unmounting '/data/gluster/test'.
[2018-02-03 13:53:22.914173] W [glusterfsd.c:1251:cleanup_and_exit] (-->/lib64/libpthread.so.0(+0x7bb0) [0xffffaceb7bb0] -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0x11c) [0xaaaad6e19bc0] -->/usr/sbin/glusterfs(cleanup_and_exit+0x78) [0xaaaad6e199fc] ) 0-: received signum (15), shutting down



[root@f08n25 glusterfs]# tail -n 50 etc-glusterfs-glusterd.vol.log
[2018-02-03 13:55:09.106663] I [MSGID: 106143] [glusterd-pmap.c:231:pmap_registry_bind] 0-pmap: adding brick /data/gluster/d5da7794-2654-4845-b8ba-a2ed01c04b41/W_W on port 49195
[2018-02-03 13:55:09.108871] I [MSGID: 106143] [glusterd-pmap.c:231:pmap_registry_bind] 0-pmap: adding brick /data/gluster/d5da7794-2654-4845-b8ba-a2ed01c04b41/QAQ on port 49183
[2018-02-03 13:55:09.111075] I [MSGID: 106143] [glusterd-pmap.c:231:pmap_registry_bind] 0-pmap: adding brick /data/gluster/d5da7794-2654-4845-b8ba-a2ed01c04b41/v_v on port 49171
[2018-02-03 13:55:09.113281] I [MSGID: 106143] [glusterd-pmap.c:231:pmap_registry_bind] 0-pmap: adding brick /data/gluster/d60a681a-af2a-40ae-9732-9902bd2be614/QAQ on port 49184
[2018-02-03 13:55:09.205465] I [MSGID: 106143] [glusterd-pmap.c:231:pmap_registry_bind] 0-pmap: adding brick /data/gluster/dfd7e810-d4f8-4b4b-b5d1-53e5652f1a8f/OTZ on port 49210
[2018-02-03 13:55:09.207655] I [MSGID: 106143] [glusterd-pmap.c:231:pmap_registry_bind] 0-pmap: adding brick /data/gluster/c26580f1-e879-4225-8b3b-e3a3738130a0/A_s on port 49219
[2018-02-03 13:55:09.209824] I [MSGID: 106143] [glusterd-pmap.c:231:pmap_registry_bind] 0-pmap: adding brick /data/gluster/d60a681a-af2a-40ae-9732-9902bd2be614/v_v on port 49172
[2018-02-03 13:55:09.212030] I [MSGID: 106143] [glusterd-pmap.c:231:pmap_registry_bind] 0-pmap: adding brick /data/gluster/00812b9d-0c7a-4960-a9b9-389a98424bce/W_W on port 49188
[2018-02-03 13:55:09.214255] I [MSGID: 106143] [glusterd-pmap.c:231:pmap_registry_bind] 0-pmap: adding brick /data/gluster/d60a681a-af2a-40ae-9732-9902bd2be614/W_W on port 49196
[2018-02-03 13:55:09.216464] I [MSGID: 106143] [glusterd-pmap.c:231:pmap_registry_bind] 0-pmap: adding brick /data/gluster/d793e3ff-220b-4d94-865c-99ad16953403/QAQ on port 49185
[2018-02-03 13:55:09.218656] I [MSGID: 106143] [glusterd-pmap.c:231:pmap_registry_bind] 0-pmap: adding brick /data/gluster/37313921-a9b2-41f1-b2ff-493485c2b449/OTZ on port 49203
[2018-02-03 13:55:09.220827] I [MSGID: 106143] [glusterd-pmap.c:231:pmap_registry_bind] 0-pmap: adding brick /data/gluster/00812b9d-0c7a-4960-a9b9-389a98424bce/Q on port 49152
[2018-02-03 13:55:09.223005] I [MSGID: 106143] [glusterd-pmap.c:231:pmap_registry_bind] 0-pmap: adding brick /data/gluster/d5da7794-2654-4845-b8ba-a2ed01c04b41/Q on port 49159
[2018-02-03 13:55:09.225209] I [MSGID: 106143] [glusterd-pmap.c:231:pmap_registry_bind] 0-pmap: adding brick /data/gluster/d793e3ff-220b-4d94-865c-99ad16953403/v_v on port 49173
[2018-02-03 13:55:09.227452] I [MSGID: 106143] [glusterd-pmap.c:231:pmap_registry_bind] 0-pmap: adding brick /data/gluster/dfd7e810-d4f8-4b4b-b5d1-53e5652f1a8f/QAQ on port 49186
[2018-02-03 13:55:09.229654] I [MSGID: 106143] [glusterd-pmap.c:231:pmap_registry_bind] 0-pmap: adding brick /data/gluster/2a5f45bd-28a8-44a1-bb0b-594294b9c5c4/W_W on port 49189
[2018-02-03 13:55:09.231855] I [MSGID: 106143] [glusterd-pmap.c:231:pmap_registry_bind] 0-pmap: adding brick /data/gluster/d793e3ff-220b-4d94-865c-99ad16953403/W_W on port 49197
[2018-02-03 13:55:09.234073] I [MSGID: 106143] [glusterd-pmap.c:231:pmap_registry_bind] 0-pmap: adding brick /data/gluster/dfd7e810-d4f8-4b4b-b5d1-53e5652f1a8f/v_v on port 49174
[2018-02-03 13:55:09.236285] I [MSGID: 106143] [glusterd-pmap.c:231:pmap_registry_bind] 0-pmap: adding brick /data/gluster/f7b86039-c3e9-425f-8d9e-d063b380058e/OTZ on port 49211
[2018-02-03 13:55:09.238463] I [MSGID: 106143] [glusterd-pmap.c:231:pmap_registry_bind] 0-pmap: adding brick /data/gluster/00812b9d-0c7a-4960-a9b9-389a98424bce/A_s on port 49212
[2018-02-03 13:55:09.240933] I [MSGID: 106143] [glusterd-pmap.c:231:pmap_registry_bind] 0-pmap: adding brick /data/gluster/2a5f45bd-28a8-44a1-bb0b-594294b9c5c4/Q on port 49153
[2018-02-03 13:55:09.243114] I [MSGID: 106143] [glusterd-pmap.c:231:pmap_registry_bind] 0-pmap: adding brick /data/gluster/d5da7794-2654-4845-b8ba-a2ed01c04b41/A_s on port 49220
[2018-02-03 13:55:09.245295] I [MSGID: 106143] [glusterd-pmap.c:231:pmap_registry_bind] 0-pmap: adding brick /data/gluster/55e4259d-8fc9-456b-861f-617be3683cd0/OTZ on port 49204
[2018-02-03 13:55:09.247480] I [MSGID: 106143] [glusterd-pmap.c:231:pmap_registry_bind] 0-pmap: adding brick /data/gluster/f7b86039-c3e9-425f-8d9e-d063b380058e/QAQ on port 49187
[2018-02-03 13:55:09.249670] I [MSGID: 106143] [glusterd-pmap.c:231:pmap_registry_bind] 0-pmap: adding brick /data/gluster/d60a681a-af2a-40ae-9732-9902bd2be614/Q on port 49160
[2018-02-03 13:55:09.251854] I [MSGID: 106143] [glusterd-pmap.c:231:pmap_registry_bind] 0-pmap: adding brick /data/gluster/f7b86039-c3e9-425f-8d9e-d063b380058e/v_v on port 49175
[2018-02-03 13:55:09.254376] I [MSGID: 106143] [glusterd-pmap.c:231:pmap_registry_bind] 0-pmap: adding brick /data/gluster/2f74a970-372c-4fda-95e5-5d0de352966d/W_W on port 49190
[2018-02-03 13:55:09.256604] I [MSGID: 106143] [glusterd-pmap.c:231:pmap_registry_bind] 0-pmap: adding brick /data/gluster/2a5f45bd-28a8-44a1-bb0b-594294b9c5c4/A_s on port 49213
[2018-02-03 13:55:09.258780] I [MSGID: 106143] [glusterd-pmap.c:231:pmap_registry_bind] 0-pmap: adding brick /data/gluster/dfd7e810-d4f8-4b4b-b5d1-53e5652f1a8f/W_W on port 49198
[2018-02-03 13:55:09.260979] I [MSGID: 106143] [glusterd-pmap.c:231:pmap_registry_bind] 0-pmap: adding brick /data/gluster/a2a12678-7a4e-4e0a-8e4e-a96dedc0217d/OTZ on port 49205
[2018-02-03 13:55:09.263155] I [MSGID: 106143] [glusterd-pmap.c:231:pmap_registry_bind] 0-pmap: adding brick /data/gluster/2f74a970-372c-4fda-95e5-5d0de352966d/Q on port 49154
[2018-02-03 13:55:09.265360] I [MSGID: 106143] [glusterd-pmap.c:231:pmap_registry_bind] 0-pmap: adding brick /data/gluster/d793e3ff-220b-4d94-865c-99ad16953403/Q on port 49161
[2018-02-03 13:55:09.267556] I [MSGID: 106143] [glusterd-pmap.c:231:pmap_registry_bind] 0-pmap: adding brick /data/gluster/d60a681a-af2a-40ae-9732-9902bd2be614/A_s on port 49221
[2018-02-03 13:55:09.269730] I [MSGID: 106143] [glusterd-pmap.c:231:pmap_registry_bind] 0-pmap: adding brick /data/gluster/37313921-a9b2-41f1-b2ff-493485c2b449/W_W on port 49191
[2018-02-03 13:55:09.271933] I [MSGID: 106143] [glusterd-pmap.c:231:pmap_registry_bind] 0-pmap: adding brick /data/gluster/00812b9d-0c7a-4960-a9b9-389a98424bce/QAQ on port 49176
[2018-02-03 13:55:09.274159] I [MSGID: 106143] [glusterd-pmap.c:231:pmap_registry_bind] 0-pmap: adding brick /data/gluster/f7b86039-c3e9-425f-8d9e-d063b380058e/W_W on port 49199
[2018-02-03 13:55:09.276368] I [MSGID: 106143] [glusterd-pmap.c:231:pmap_registry_bind] 0-pmap: adding brick /data/gluster/00812b9d-0c7a-4960-a9b9-389a98424bce/v_v on port 49164
[2018-02-03 13:55:09.278578] I [MSGID: 106143] [glusterd-pmap.c:231:pmap_registry_bind] 0-pmap: adding brick /data/gluster/d793e3ff-220b-4d94-865c-99ad16953403/A_s on port 49222
[2018-02-03 13:55:09.280747] I [MSGID: 106143] [glusterd-pmap.c:231:pmap_registry_bind] 0-pmap: adding brick /data/gluster/2f74a970-372c-4fda-95e5-5d0de352966d/A_s on port 49215
[2018-02-03 13:55:09.282913] I [MSGID: 106143] [glusterd-pmap.c:231:pmap_registry_bind] 0-pmap: adding brick /data/gluster/37313921-a9b2-41f1-b2ff-493485c2b449/Q on port 49155
[2018-02-03 13:55:09.285117] I [MSGID: 106143] [glusterd-pmap.c:231:pmap_registry_bind] 0-pmap: adding brick /data/gluster/c26580f1-e879-4225-8b3b-e3a3738130a0/OTZ on port 49206
[2018-02-03 13:55:09.315499] I [MSGID: 106490] [glusterd-handler.c:2600:__glusterd_handle_incoming_friend_req] 0-glusterd: Received probe from uuid: d8ee609f-d47e-46e4-96ac-ef78cddf45b8
[2018-02-03 13:55:09.325999] I [MSGID: 106493] [glusterd-handler.c:3843:glusterd_xfer_friend_add_resp] 0-glusterd: Responded to f08n29 (0), ret: 0, op_ret: 0
[2018-02-03 13:55:09.331799] I [MSGID: 106492] [glusterd-handler.c:2776:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: d8ee609f-d47e-46e4-96ac-ef78cddf45b8
[2018-02-03 13:55:09.331876] I [MSGID: 106502] [glusterd-handler.c:2821:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend
[2018-02-03 13:55:09.333350] I [MSGID: 106493] [glusterd-rpc-ops.c:696:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: d8ee609f-d47e-46e4-96ac-ef78cddf45b8
[2018-02-03 13:55:38.833207] I [MSGID: 106487] [glusterd-handler.c:1472:__glusterd_handle_cli_list_friends] 0-glusterd: Received cli list req
[2018-02-03 13:57:09.161466] I [MSGID: 106487] [glusterd-handler.c:1472:__glusterd_handle_cli_list_friends] 0-glusterd: Received cli list req
[2018-02-03 13:58:39.566130] I [MSGID: 106487] [glusterd-handler.c:1472:__glusterd_handle_cli_list_friends] 0-glusterd: Received cli list req
[2018-02-03 14:00:09.918411] I [MSGID: 106487] [glusterd-handler.c:1472:__glusterd_handle_cli_list_friends] 0-glusterd: Received cli list req



The snippet of the glusterd log file you pointed to is from the handshake. What we'd need is to check the logs for a particular transaction. Would you be able to share the glusterd log files from all the nodes, along with the cmd_history.log from the node where the commands were initiated?
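For reference, one way to gather these on each node, assuming the default log directory (cmd_history.log lives there as well; the archive name is just an example):

# run on every node, then attach the resulting tarballs
tar -czf glusterfs-logs-$(hostname).tar.gz /var/log/glusterfs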
 





[root@f08n25 glusterfs]# tail -n 50 glustershd.log
2478:     option remote-host 10.33.0.8
2479:     option remote-subvolume /data/gluster/ec025105-5ef2-46f2-b1cf-01a05daa237e/QAQ
2480:     option transport-type socket
2481:     option username 2a31c3ba-696d-4d49-9dee-540e47791e4a
2482:     option password 3f751501-3148-42bf-8c55-bfe1b3ddc747
2483: end-volume
2484:  
2485: volume QAQ-replicate-11
2486:     type cluster/replicate
2487:     option node-uuid 194ca8ea-df5b-4d5e-9af6-dbf36c485334
2488:     option background-self-heal-count 0
2489:     option metadata-self-heal on
2490:     option data-self-heal on
2491:     option entry-self-heal on
2492:     option self-heal-daemon enable
2493:     option iam-self-heal-daemon yes
2494:     subvolumes QAQ-client-33 QAQ-client-34 QAQ-client-35
2495: end-volume
2496:  
2497: volume glustershd
2498:     type debug/io-stats
2499:     option log-level INFO
2500:     subvolumes A_s-replicate-0 A_s-replicate-1 A_s-replicate-2 A_s-replicate-3 A_s-replicate-4 A_s-replicate-5 A_s-replicate-6 A_s-replicate-7 A_s-replicate-8 A_s-replicate-9 A_s-replicate-10 A_s-replicate-11 OTZ-disperse-0 OTZ-disperse-1 OTZ-disperse-2 OTZ-disperse-3 OTZ-disperse-4 OTZ-disperse-5 OTZ-disperse-6 OTZ-disperse-7 OTZ-disperse-8 OTZ-disperse-9 OTZ-disperse-10 OTZ-disperse-11 Q-disperse-0 Q-disperse-1 Q-disperse-2 Q-disperse-3 Q-disperse-4 Q-disperse-5 Q-disperse-6 Q-disperse-7 Q-disperse-8 Q-disperse-9 Q-disperse-10 Q-disperse-11 QAQ-replicate-0 QAQ-replicate-1 QAQ-replicate-2 QAQ-replicate-3 QAQ-replicate-4 QAQ-replicate-5 QAQ-replicate-6 QAQ-replicate-7 QAQ-replicate-8 QAQ-replicate-9 QAQ-replicate-10 QAQ-replicate-11
2501: end-volume
2502:  
+------------------------------------------------------------------------------+
[2018-02-03 13:55:20.771993] I [MSGID: 122061] [ec.c:313:ec_up] 0-OTZ-disperse-0: Going UP
[2018-02-03 13:55:20.772100] I [MSGID: 122061] [ec.c:313:ec_up] 0-OTZ-disperse-1: Going UP
[2018-02-03 13:55:20.772157] I [MSGID: 122061] [ec.c:313:ec_up] 0-OTZ-disperse-2: Going UP
[2018-02-03 13:55:20.772208] I [MSGID: 122061] [ec.c:313:ec_up] 0-OTZ-disperse-3: Going UP
[2018-02-03 13:55:20.772260] I [MSGID: 122061] [ec.c:313:ec_up] 0-OTZ-disperse-4: Going UP
[2018-02-03 13:55:20.772311] I [MSGID: 122061] [ec.c:313:ec_up] 0-OTZ-disperse-5: Going UP
[2018-02-03 13:55:20.772364] I [MSGID: 122061] [ec.c:313:ec_up] 0-OTZ-disperse-6: Going UP
[2018-02-03 13:55:20.772418] I [MSGID: 122061] [ec.c:313:ec_up] 0-OTZ-disperse-7: Going UP
[2018-02-03 13:55:20.772468] I [MSGID: 122061] [ec.c:313:ec_up] 0-OTZ-disperse-8: Going UP
[2018-02-03 13:55:20.772521] I [MSGID: 122061] [ec.c:313:ec_up] 0-OTZ-disperse-9: Going UP
[2018-02-03 13:55:20.772572] I [MSGID: 122061] [ec.c:313:ec_up] 0-OTZ-disperse-10: Going UP
[2018-02-03 13:55:20.772626] I [MSGID: 122061] [ec.c:313:ec_up] 0-OTZ-disperse-11: Going UP
[2018-02-03 13:55:20.772679] I [MSGID: 122061] [ec.c:313:ec_up] 0-Q-disperse-0: Going UP
[2018-02-03 13:55:20.772732] I [MSGID: 122061] [ec.c:313:ec_up] 0-Q-disperse-1: Going UP
[2018-02-03 13:55:20.772786] I [MSGID: 122061] [ec.c:313:ec_up] 0-Q-disperse-2: Going UP
[2018-02-03 13:55:20.772838] I [MSGID: 122061] [ec.c:313:ec_up] 0-Q-disperse-3: Going UP
[2018-02-03 13:55:20.772891] I [MSGID: 122061] [ec.c:313:ec_up] 0-Q-disperse-4: Going UP
[2018-02-03 13:55:20.772941] I [MSGID: 122061] [ec.c:313:ec_up] 0-Q-disperse-5: Going UP
[2018-02-03 13:55:20.772994] I [MSGID: 122061] [ec.c:313:ec_up] 0-Q-disperse-6: Going UP
[2018-02-03 13:55:20.773046] I [MSGID: 122061] [ec.c:313:ec_up] 0-Q-disperse-7: Going UP
[2018-02-03 13:55:20.773099] I [MSGID: 122061] [ec.c:313:ec_up] 0-Q-disperse-8: Going UP
[2018-02-03 13:55:20.773150] I [MSGID: 122061] [ec.c:313:ec_up] 0-Q-disperse-9: Going UP
[2018-02-03 13:55:21.773401] I [MSGID: 122061] [ec.c:313:ec_up] 0-Q-disperse-10: Going UP
[2018-02-03 13:55:21.773496] I [MSGID: 122061] [ec.c:313:ec_up] 0-Q-disperse-11: Going UP
[root@f08n25 glusterfs]#









-------------- next part --------------
A non-text attachment was scrubbed...
Name: GlusterFS_log_f08n25.tar.gz
Type: application/octet-stream
Size: 31439 bytes
Desc: not available
URL: <http://lists.gluster.org/pipermail/gluster-users/attachments/20180116/cc09f06e/attachment.obj>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: GlusterFS_log_f08n29.tar.gz
Type: application/octet-stream
Size: 28884 bytes
Desc: not available
URL: <http://lists.gluster.org/pipermail/gluster-users/attachments/20180116/cc09f06e/attachment-0001.obj>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: GlusterFS_log_f08n33.tar.gz
Type: application/octet-stream
Size: 27681 bytes
Desc: not available
URL: <http://lists.gluster.org/pipermail/gluster-users/attachments/20180116/cc09f06e/attachment-0002.obj>

