<div><br><div class="gmail_quote"><div dir="auto">On Mon, 4 Sep 2017 at 20:04, Serkan Çoban <<a href="mailto:cobanserkan@gmail.com">cobanserkan@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">I have been using a 60-server, 1560-brick 3.7.11 cluster without<br>
problems for a year; I did not see this problem with it.<br>
Note that this problem does not happen when I install the packages, start<br>
glusterd, peer probe, and create the volumes; it only appears after a<br>
glusterd restart.<br>
<br>
Also note that this still happens without any volumes, so I don't think<br>
it is related to the brick count...</blockquote><div dir="auto"><br></div><div dir="auto">The backtrace you shared earlier involves a code path where all brick details are synced up, so I'd be really interested to see a backtrace of this when there are no volumes at all.</div><div dir="auto"><br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><br>
<br>
On Mon, Sep 4, 2017 at 5:08 PM, Atin Mukherjee <<a href="mailto:amukherj@redhat.com" target="_blank">amukherj@redhat.com</a>> wrote:<br>
><br>
><br>
> On Mon, Sep 4, 2017 at 5:28 PM, Serkan Çoban <<a href="mailto:cobanserkan@gmail.com" target="_blank">cobanserkan@gmail.com</a>> wrote:<br>
>><br>
>> >1. On the 80-node cluster, did you reboot only one node or multiple ones?<br>
>> Tried both; the result is the same, but the logs/stacks are from stopping<br>
>> and starting glusterd on only one server while the others are running.<br>
>><br>
>> >2. Are you sure that the pstack output was always pointing at strcmp<br>
>> > being stuck?<br>
>> It stays at 100% CPU for 70-80 minutes; the stacks I sent are from the<br>
>> first 5-10 minutes. I will capture stack traces at 10-minute intervals<br>
>> and send them to you tomorrow. Also, with 40 servers it stays that way<br>
>> for 5 minutes and then returns to normal.<br>
>><br>
>> >3. Are you absolutely sure that even after a few hours glusterd is stuck<br>
>> > at the same point?<br>
>> It returns to a normal state after 70-80 minutes, and I can run cluster<br>
>> commands after that. I will check this again to be sure.<br>
><br>
><br>
> So this is a scalability issue you're hitting with the current glusterd<br>
> design. As I mentioned earlier, peer handshaking can be a really costly<br>
> operation depending on how far you scale the cluster, and hence you might<br>
> see a huge delay before the node brings up all its services and becomes<br>
> operational.<br>
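[Editor's note: to make the quadratic handshake cost concrete, here is a back-of-the-envelope sketch in plain Python. This is an illustrative model only, not GlusterD code; the assumption that every node performs a directed handshake with every other node follows the n-square mesh description given later in this thread.]

```python
# Illustrative model (assumption, not glusterd code): in an n-node full
# mesh, every node handshakes with every other node, so the number of
# directed handshakes grows quadratically with cluster size.
def handshake_messages(n_nodes: int) -> int:
    """Directed peer handshakes in an n-node full mesh: n * (n - 1)."""
    return n_nodes * (n_nodes - 1)

for n in (10, 40, 80):
    print(f"{n} nodes -> {handshake_messages(n)} handshakes")
```

Under this model, doubling the cluster from 40 to 80 nodes roughly quadruples the handshake traffic (1,560 vs. 6,320), which is consistent with the 40-node cluster recovering in about 5 minutes while the 80-node one takes 70-80.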
><br>
>><br>
>> On Mon, Sep 4, 2017 at 1:43 PM, Atin Mukherjee <<a href="mailto:amukherj@redhat.com" target="_blank">amukherj@redhat.com</a>><br>
>> wrote:<br>
>> ><br>
>> ><br>
>> > On Fri, Sep 1, 2017 at 8:47 AM, Milind Changire <<a href="mailto:mchangir@redhat.com" target="_blank">mchangir@redhat.com</a>><br>
>> > wrote:<br>
>> >><br>
>> >> Serkan,<br>
>> >> I have gone through the other mails in this thread as well, but am<br>
>> >> responding to this one specifically.<br>
>> >><br>
>> >> Is this a source install or an RPM install?<br>
>> >> If this is an RPM install, could you please install the<br>
>> >> glusterfs-debuginfo RPM and retry capturing the gdb backtrace?<br>
>> >><br>
>> >> If this is a source install, then you'll need to configure the build<br>
>> >> with --enable-debug, reinstall, and retry capturing the gdb backtrace.<br>
>> >><br>
>> >> Having the debuginfo package or a debug build helps to resolve the<br>
>> >> function names and/or line numbers.<br>
>> >> --<br>
>> >> Milind<br>
>> >><br>
>> >><br>
>> >><br>
>> >> On Thu, Aug 24, 2017 at 11:19 AM, Serkan Çoban <<a href="mailto:cobanserkan@gmail.com" target="_blank">cobanserkan@gmail.com</a>><br>
>> >> wrote:<br>
>> >>><br>
>> >>> Here you can find 10 stack trace samples from glusterd; I waited 10<br>
>> >>> seconds between each trace.<br>
>> >>> <a href="https://www.dropbox.com/s/9f36goq5xn3p1yt/glusterd_pstack.zip?dl=0" rel="noreferrer" target="_blank">https://www.dropbox.com/s/9f36goq5xn3p1yt/glusterd_pstack.zip?dl=0</a><br>
>> >>><br>
>> >>> Content of the first stack trace is here:<br>
>> >>><br>
>> >>> Thread 8 (Thread 0x7f7a8cd4e700 (LWP 43069)):<br>
>> >>> #0 0x0000003aa5c0f00d in nanosleep () from /lib64/libpthread.so.0<br>
>> >>> #1 0x000000303f837d57 in ?? () from /usr/lib64/libglusterfs.so.0<br>
>> >>> #2 0x0000003aa5c07aa1 in start_thread () from /lib64/libpthread.so.0<br>
>> >>> #3 0x0000003aa58e8bbd in clone () from /lib64/libc.so.6<br>
>> >>> Thread 7 (Thread 0x7f7a8c34d700 (LWP 43070)):<br>
>> >>> #0 0x0000003aa5c0f585 in sigwait () from /lib64/libpthread.so.0<br>
>> >>> #1 0x000000000040643b in glusterfs_sigwaiter ()<br>
>> >>> #2 0x0000003aa5c07aa1 in start_thread () from /lib64/libpthread.so.0<br>
>> >>> #3 0x0000003aa58e8bbd in clone () from /lib64/libc.so.6<br>
>> >>> Thread 6 (Thread 0x7f7a8b94c700 (LWP 43071)):<br>
>> >>> #0 0x0000003aa58acc4d in nanosleep () from /lib64/libc.so.6<br>
>> >>> #1 0x0000003aa58acac0 in sleep () from /lib64/libc.so.6<br>
>> >>> #2 0x000000303f8528fb in pool_sweeper () from<br>
>> >>> /usr/lib64/libglusterfs.so.0<br>
>> >>> #3 0x0000003aa5c07aa1 in start_thread () from /lib64/libpthread.so.0<br>
>> >>> #4 0x0000003aa58e8bbd in clone () from /lib64/libc.so.6<br>
>> >>> Thread 5 (Thread 0x7f7a8af4b700 (LWP 43072)):<br>
>> >>> #0 0x0000003aa5c0ba5e in pthread_cond_timedwait@@GLIBC_2.3.2 () from<br>
>> >>> /lib64/libpthread.so.0<br>
>> >>> #1 0x000000303f864afc in syncenv_task () from<br>
>> >>> /usr/lib64/libglusterfs.so.0<br>
>> >>> #2 0x000000303f8729f0 in syncenv_processor () from<br>
>> >>> /usr/lib64/libglusterfs.so.0<br>
>> >>> #3 0x0000003aa5c07aa1 in start_thread () from /lib64/libpthread.so.0<br>
>> >>> #4 0x0000003aa58e8bbd in clone () from /lib64/libc.so.6<br>
>> >>> Thread 4 (Thread 0x7f7a8a54a700 (LWP 43073)):<br>
>> >>> #0 0x0000003aa5c0ba5e in pthread_cond_timedwait@@GLIBC_2.3.2 () from<br>
>> >>> /lib64/libpthread.so.0<br>
>> >>> #1 0x000000303f864afc in syncenv_task () from<br>
>> >>> /usr/lib64/libglusterfs.so.0<br>
>> >>> #2 0x000000303f8729f0 in syncenv_processor () from<br>
>> >>> /usr/lib64/libglusterfs.so.0<br>
>> >>> #3 0x0000003aa5c07aa1 in start_thread () from /lib64/libpthread.so.0<br>
>> >>> #4 0x0000003aa58e8bbd in clone () from /lib64/libc.so.6<br>
>> >>> Thread 3 (Thread 0x7f7a886ac700 (LWP 43075)):<br>
>> >>> #0 0x0000003aa5c0b68c in pthread_cond_wait@@GLIBC_2.3.2 () from<br>
>> >>> /lib64/libpthread.so.0<br>
>> >>> #1 0x00007f7a898a099b in ?? () from<br>
>> >>> /usr/lib64/glusterfs/3.10.5/xlator/mgmt/glusterd.so<br>
>> >>> #2 0x0000003aa5c07aa1 in start_thread () from /lib64/libpthread.so.0<br>
>> >>> #3 0x0000003aa58e8bbd in clone () from /lib64/libc.so.6<br>
>> >>> Thread 2 (Thread 0x7f7a87cab700 (LWP 43076)):<br>
>> >>> #0 0x0000003aa5928692 in __strcmp_sse42 () from /lib64/libc.so.6<br>
>> >>> #1 0x000000303f82244a in ?? () from /usr/lib64/libglusterfs.so.0<br>
>> >>> #2 0x000000303f82433d in ?? () from /usr/lib64/libglusterfs.so.0<br>
>> >>> #3 0x000000303f8245f5 in dict_set () from<br>
>> >>> /usr/lib64/libglusterfs.so.0<br>
>> >>> #4 0x000000303f82524c in dict_set_str () from<br>
>> >>> /usr/lib64/libglusterfs.so.0<br>
>> >>> #5 0x00007f7a898da7fd in ?? () from<br>
>> >>> /usr/lib64/glusterfs/3.10.5/xlator/mgmt/glusterd.so<br>
>> >>> #6 0x00007f7a8981b0df in ?? () from<br>
>> >>> /usr/lib64/glusterfs/3.10.5/xlator/mgmt/glusterd.so<br>
>> >>> #7 0x00007f7a8981b47c in ?? () from<br>
>> >>> /usr/lib64/glusterfs/3.10.5/xlator/mgmt/glusterd.so<br>
>> >>> #8 0x00007f7a89831edf in ?? () from<br>
>> >>> /usr/lib64/glusterfs/3.10.5/xlator/mgmt/glusterd.so<br>
>> >>> #9 0x00007f7a897f28f7 in ?? () from<br>
>> >>> /usr/lib64/glusterfs/3.10.5/xlator/mgmt/glusterd.so<br>
>> >>> #10 0x00007f7a897f0bb9 in ?? () from<br>
>> >>> /usr/lib64/glusterfs/3.10.5/xlator/mgmt/glusterd.so<br>
>> >>> #11 0x00007f7a8984c89a in ?? () from<br>
>> >>> /usr/lib64/glusterfs/3.10.5/xlator/mgmt/glusterd.so<br>
>> >>> #12 0x00007f7a898323ee in ?? () from<br>
>> >>> /usr/lib64/glusterfs/3.10.5/xlator/mgmt/glusterd.so<br>
>> >>> #13 0x000000303f40fad5 in rpc_clnt_handle_reply () from<br>
>> >>> /usr/lib64/libgfrpc.so.0<br>
>> >>> #14 0x000000303f410c85 in rpc_clnt_notify () from<br>
>> >>> /usr/lib64/libgfrpc.so.0<br>
>> >>> #15 0x000000303f40bd68 in rpc_transport_notify () from<br>
>> >>> /usr/lib64/libgfrpc.so.0<br>
>> >>> #16 0x00007f7a88a6fccd in ?? () from<br>
>> >>> /usr/lib64/glusterfs/3.10.5/rpc-transport/socket.so<br>
>> >>> #17 0x00007f7a88a70ffe in ?? () from<br>
>> >>> /usr/lib64/glusterfs/3.10.5/rpc-transport/socket.so<br>
>> >>> #18 0x000000303f887806 in ?? () from /usr/lib64/libglusterfs.so.0<br>
>> >>> #19 0x0000003aa5c07aa1 in start_thread () from /lib64/libpthread.so.0<br>
>> >>> #20 0x0000003aa58e8bbd in clone () from /lib64/libc.so.6<br>
>> >>> Thread 1 (Thread 0x7f7a93844740 (LWP 43068)):<br>
>> >>> #0 0x0000003aa5c082fd in pthread_join () from /lib64/libpthread.so.0<br>
>> >>> #1 0x000000303f8872d5 in ?? () from /usr/lib64/libglusterfs.so.0<br>
>> >>> #2 0x0000000000409020 in main ()<br>
>> ><br>
>> ><br>
>> > Serkan,<br>
>> ><br>
>> > If you could answer the following questions, that would help us debug<br>
>> > this issue further:<br>
>> ><br>
>> > 1. On the 80-node cluster, did you reboot only one node or multiple ones?<br>
>> > 2. Are you sure that the pstack output was always pointing at strcmp<br>
>> > being stuck? The reason I ask is that on an 80-node setup the friend<br>
>> > handshake operation would be very costly due to glusterd's existing<br>
>> > design: it follows an n-square mesh communication approach and makes<br>
>> > sure all the configuration data is consistent across nodes. This is<br>
>> > the exact reason why we want to move to GlusterD2.<br>
>> > 3. Are you absolutely sure that even after a few hours glusterd is stuck<br>
>> > at the same point?<br>
>> ><br>
>> > Looking at the backtrace, I don't see any reason why strcmp would be<br>
>> > stuck unless we're trying to read through all the bricks (1600 X 3) X<br>
>> > 79 times.<br>
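[Editor's note: the strcmp frames under dict_set in the backtrace are consistent with a dictionary that scans its existing members key-by-key on every insert. The following is a simplified Python model of that cost shape, an assumption for illustration rather than the actual libglusterfs implementation: building a dict of k distinct keys this way costs on the order of k² string comparisons, which would explain how per-brick dict_set_str calls repeated per peer can pin a CPU in strcmp.]

```python
# Simplified model (not the libglusterfs code): a dict stored as a flat
# sequence of pairs, where every set() walks the existing keys comparing
# each one, the way strcmp appears under dict_set in the backtrace above.
class ListDict:
    def __init__(self):
        self.pairs = []          # list of (key, value) pairs
        self.comparisons = 0     # count of key comparisons ("strcmp calls")

    def set(self, key, value):
        for i, (k, _) in enumerate(self.pairs):
            self.comparisons += 1
            if k == key:                      # existing key: overwrite
                self.pairs[i] = (key, value)
                return
        self.pairs.append((key, value))       # new key: append at the end

d = ListDict()
n_keys = 1000
for i in range(n_keys):
    d.set(f"brick{i}.path", "/data/brick")    # hypothetical key names

# k inserts of distinct keys cost 0 + 1 + ... + (k - 1) comparisons.
print(d.comparisons)  # 499500 == 1000 * 999 / 2
```

If each of 79 peer handshakes rebuilds dictionaries over thousands of brick entries this way, the total comparison count grows fast enough to match the observed 70-80 minutes of 100% CPU.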
>> ><br>
>> >>><br>
>> >>> On Wed, Aug 23, 2017 at 8:46 PM, Atin Mukherjee <<a href="mailto:amukherj@redhat.com" target="_blank">amukherj@redhat.com</a>><br>
>> >>> wrote:<br>
>> >>> > Could you provide a pstack dump of the glusterd<br>
>> >>> > process?<br>
>> >>> ><br>
>> >>> > On Wed, 23 Aug 2017 at 20:22, Atin Mukherjee <<a href="mailto:amukherj@redhat.com" target="_blank">amukherj@redhat.com</a>><br>
>> >>> > wrote:<br>
>> >>> >><br>
>> >>> >> Not yet. Gaurav will be taking a look at it tomorrow.<br>
>> >>> >><br>
>> >>> >> On Wed, 23 Aug 2017 at 20:14, Serkan Çoban <<a href="mailto:cobanserkan@gmail.com" target="_blank">cobanserkan@gmail.com</a>><br>
>> >>> >> wrote:<br>
>> >>> >>><br>
>> >>> >>> Hi Atin,<br>
>> >>> >>><br>
>> >>> >>> Do you have time to check the logs?<br>
>> >>> >>><br>
>> >>> >>> On Wed, Aug 23, 2017 at 10:02 AM, Serkan Çoban<br>
>> >>> >>> <<a href="mailto:cobanserkan@gmail.com" target="_blank">cobanserkan@gmail.com</a>><br>
>> >>> >>> wrote:<br>
>> >>> >>> > The same thing happens with 3.12.rc0. This time perf top shows<br>
>> >>> >>> > the hang in libglusterfs.so, and below are the glusterd logs,<br>
>> >>> >>> > which are different from 3.10's.<br>
>> >>> >>> > With 3.10.5, after 60-70 minutes CPU usage returns to normal,<br>
>> >>> >>> > the brick processes come online, and the system starts to answer<br>
>> >>> >>> > commands like "gluster peer status".<br>
>> >>> >>> ><br>
>> >>> >>> > [2017-08-23 06:46:02.150472] E [client_t.c:324:gf_client_ref]<br>
>> >>> >>> > (-->/usr/lib64/libgfrpc.so.0(rpcsvc_request_create+0xf1)<br>
>> >>> >>> > [0x7f5ae2c091b1]<br>
>> >>> >>> > -->/usr/lib64/libgfrpc.so.0(rpcsvc_request_init+0x9c)<br>
>> >>> >>> > [0x7f5ae2c0851c]<br>
>> >>> >>> > -->/usr/lib64/libglusterfs.so.0(gf_client_ref+0x1a9)<br>
>> >>> >>> > [0x7f5ae2ea3949] ) 0-client_t: null client [Invalid argument]<br>
>> >>> >>> > [the same gf_client_ref "null client" error repeats 22 more<br>
>> >>> >>> > times with successive timestamps through 06:46:02.155141]<br>
>> >>> >>> > [2017-08-23 06:46:27.074052] E [client_t.c:324:gf_client_ref]<br>
>> >>> >>> > (-->/usr/lib64/libgfrpc.so.0(rpcsvc_request_create+0xf1)<br>
>> >>> >>> > [0x7f5ae2c091b1]<br>
>> >>> >>> > -->/usr/lib64/libgfrpc.so.0(rpcsvc_request_init+0x9c)<br>
>> >>> >>> > [0x7f5ae2c0851c]<br>
>> >>> >>> > -->/usr/lib64/libglusterfs.so.0(gf_client_ref+0x1a9)<br>
>> >>> >>> > [0x7f5ae2ea3949] ) 0-client_t: null client [Invalid argument]<br>
>> >>> >>> > [2017-08-23 06:46:27.077034] E [client_t.c:324:gf_client_ref]<br>
>> >>> >>> > (-->/usr/lib64/libgfrpc.so.0(rpcsvc_request_create+0xf1)<br>
>> >>> >>> > [0x7f5ae2c091b1]<br>
>> >>> >>> > -->/usr/lib64/libgfrpc.so.0(rpcsvc_request_init+0x9c)<br>
>> >>> >>> > [0x7f5ae2c0851c]<br>
>> >>> >>> > -->/usr/lib64/libglusterfs.so.0(gf_client_ref+0x1a9)<br>
>> >>> >>> > [0x7f5ae2ea3949] ) 0-client_t: null client [Invalid argument]<br>
>> >>> >>> ><br>
>> >>> >>> > On Tue, Aug 22, 2017 at 7:00 PM, Serkan Çoban<br>
>> >>> >>> > <<a href="mailto:cobanserkan@gmail.com" target="_blank">cobanserkan@gmail.com</a>><br>
>> >>> >>> > wrote:<br>
>> >>> >>> >> I rebooted multiple times; I also destroyed the gluster<br>
>> >>> >>> >> configuration and recreated it multiple times. The behavior<br>
>> >>> >>> >> is the same.<br>
>> >>> >>> >><br>
>> >>> >>> >> On Tue, Aug 22, 2017 at 6:47 PM, Atin Mukherjee<br>
>> >>> >>> >> <<a href="mailto:amukherj@redhat.com" target="_blank">amukherj@redhat.com</a>><br>
>> >>> >>> >> wrote:<br>
>> >>> >>> >>> My guess is that there is a corruption in the volume list or<br>
>> >>> >>> >>> peer list which has led glusterd into an infinite loop of<br>
>> >>> >>> >>> traversing a peer/volume list, hogging the CPU. Again, this is<br>
>> >>> >>> >>> a guess; I've not had a chance to take a detailed look at the<br>
>> >>> >>> >>> logs and the strace output.<br>
>> >>> >>> >>><br>
>> >>> >>> >>> I believe that if you reboot the node again the problem will<br>
>> >>> >>> >>> disappear.<br>
>> >>> >>> >>><br>
>> >>> >>> >>> On Tue, 22 Aug 2017 at 20:07, Serkan Çoban<br>
>> >>> >>> >>> <<a href="mailto:cobanserkan@gmail.com" target="_blank">cobanserkan@gmail.com</a>><br>
>> >>> >>> >>> wrote:<br>
>> >>> >>> >>>><br>
>> >>> >>> >>>> In addition, during glusterd's 100% CPU usage, perf top shows<br>
>> >>> >>> >>>> 80% in <a href="http://libc-2.12.so" rel="noreferrer" target="_blank">libc-2.12.so</a> __strcmp_sse42.<br>
>> >>> >>> >>>> Hope this helps...<br>
>> >>> >>> >>>><br>
>> >>> >>> >>>> On Tue, Aug 22, 2017 at 2:41 PM, Serkan Çoban<br>
>> >>> >>> >>>> <<a href="mailto:cobanserkan@gmail.com" target="_blank">cobanserkan@gmail.com</a>><br>
>> >>> >>> >>>> wrote:<br>
>> >>> >>> >>>> > Hi there,<br>
>> >>> >>> >>>> ><br>
>> >>> >>> >>>> > I have a strange problem.<br>
>> >>> >>> >>>> > The Gluster version is 3.10.5; I am testing new servers. The<br>
>> >>> >>> >>>> > Gluster configuration is 16+4 EC, and I have three volumes,<br>
>> >>> >>> >>>> > each with 1600 bricks.<br>
>> >>> >>> >>>> > I can successfully create the cluster and volumes without<br>
>> >>> >>> >>>> > any problems. I wrote data to the cluster from 100 clients<br>
>> >>> >>> >>>> > for 12 hours, again with no problem. But when I reboot a<br>
>> >>> >>> >>>> > node, the glusterd process hangs at 100% CPU usage and seems<br>
>> >>> >>> >>>> > to do nothing; no brick processes come online. You can find<br>
>> >>> >>> >>>> > an strace of the glusterd process for 1 minute here:<br>
>> >>> >>> >>>> ><br>
>> >>> >>> >>>> > <a href="https://www.dropbox.com/s/c7bxfnbqxze1yus/gluster_strace.out?dl=0" rel="noreferrer" target="_blank">https://www.dropbox.com/s/c7bxfnbqxze1yus/gluster_strace.out?dl=0</a><br>
>> >>> >>> >>>> ><br>
>> >>> >>> >>>> > Here are the glusterd logs:<br>
>> >>> >>> >>>> > <a href="https://www.dropbox.com/s/hkstb3mdeil9a5u/glusterd.log?dl=0" rel="noreferrer" target="_blank">https://www.dropbox.com/s/hkstb3mdeil9a5u/glusterd.log?dl=0</a><br>
>> >>> >>> >>>> ><br>
>> >>> >>> >>>> > By the way, rebooting one server completes without problem<br>
>> >>> >>> >>>> > if I reboot the servers before creating any volumes.<br>
>> >>> >>> >>>> _______________________________________________<br>
>> >>> >>> >>>> Gluster-users mailing list<br>
>> >>> >>> >>>> <a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a><br>
>> >>> >>> >>>> <a href="http://lists.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">http://lists.gluster.org/mailman/listinfo/gluster-users</a><br>
>> >>> >>> >>><br>
>> >>> >>> >>> --<br>
>> >>> >>> >>> - Atin (atinm)<br>
>> >>> >><br>
>> >><br>
>> >><br>
>> >><br>
>> >><br>
>> >> --<br>
>> >> Milind<br>
>> >><br>
>> ><br>
><br>
><br>
</blockquote></div></div><div dir="ltr">-- <br></div><div class="gmail_signature" data-smartmail="gmail_signature">- Atin (atinm)</div>