<div dir="ltr"><div><div><div><div>Hi Kinglong<br></div></div>I was reading about makecontext/swapcontext as well, <br></div>I did find an article that suggested to use mprotect and force a segfault to check if we have a stack space issue here.<br>here is the link. <a href="http://www.evanjones.ca/software/threading.html">http://www.evanjones.ca/software/threading.html</a>.<br><br></div><div>I don&#39;t think i can try this until tomorrow.<br></div><div>thanks and regards,<br></div><div>Sanoj<br></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Tue, Jun 13, 2017 at 5:58 AM, Kinglong Mee <span dir="ltr">&lt;<a href="mailto:kinglongmee@gmail.com" target="_blank">kinglongmee@gmail.com</a>&gt;</span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi Sanoj,<br>

What's your opinion about this problem?

thanks,
Kinglong Mee

On 6/9/2017 17:20, Kinglong Mee wrote:
> Hi Sanoj,
>
> On 6/9/2017 15:48, Sanoj Unnikrishnan wrote:
>> I have not used valgrind before, so I may be wrong here.
>>
>> I think the valgrind_stack_deregister call should have come after the GF_FREE (task->stack) call.
>> That may explain the instances of "Invalid write" during the stack-destroy calls in after.log.
>
> No. I moved it, but the "Invalid write" instances still exist.
>
>>
>> There seem to be numerous issues reported in before.log (I am assuming you did not have the valgrind_stack_register call in it).
>
> Yes, the before.log is from a test without any code change (but without io-threads).
>
>> From http://valgrind.org/docs/manual/manual-core.html, it looks like valgrind detects the client switching stacks only if there is a change of more than 2MB in the stack pointer register.
>
> I tested with a larger --max-stackframe, as:
> valgrind --leak-check=full --max-stackframe=242293216
>
>> Is it possible that, since marker is only using 16K, the new stack pointer was within less than a 2MB offset of the current stack pointer?
>
> Maybe.
> But with io-threads (and with valgrind_stack_deregister added), valgrind only shows some
> "Invalid read/write" about __gf_mem_invalidate.
> The only reason here, I think, is the 16K stack size that marker is using.
>
> I have not used makecontext/swapcontext before; am I right about the following?
> 1. Without swapcontext, the stack might be (just an example):
>    --> io_stats -> quota -> marker -> io-threads -> ...
>
> 2. With swapcontext:
>    --> io_stats -> quota -> marker
>                switch to new stack -> io-threads
>
> After switching to the new stack, the stack size is 16K.
> Is that enough without io-threads?
>
> I don't know the behavior of io-threads; does it call all sub-xlators using the 16K stack, or something else?
>
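To make the stack-size point concrete, here is a tiny, self-contained makecontext/swapcontext example (illustrative only, not the GlusterFS synctask code): everything called after swapcontext(), which for a synctask means the whole io_stats -> quota -> marker -> ... chain, runs on the caller-supplied heap stack, so that stack has to cover the deepest call path.

    #include <stdio.h>
    #include <stdlib.h>
    #include <ucontext.h>

    #define TASK_STACK_SIZE (16 * 1024)  /* same order as marker's 16K synctask stack */

    static ucontext_t main_ctx, task_ctx;

    static void
    task_wrap (void)
    {
            /* Every function called from here runs on the 16K heap block below,
             * just like the xlator call chain inside a synctask. */
            printf ("running on the small heap-allocated stack\n");
            /* Returning ends the context; uc_link switches back to main_ctx. */
    }

    int
    main (void)
    {
            void *stack = calloc (1, TASK_STACK_SIZE);

            getcontext (&task_ctx);
            task_ctx.uc_stack.ss_sp   = stack;
            task_ctx.uc_stack.ss_size = TASK_STACK_SIZE;
            task_ctx.uc_link          = &main_ctx;

            makecontext (&task_ctx, task_wrap, 0);
            swapcontext (&main_ctx, &task_ctx);  /* run task_wrap on the 16K stack */

            free (stack);
            return 0;
    }

If the frames under task_wrap() ever need more than those 16K, the writes land in whatever sits next to the heap block, which would fit the aborts seen under __gf_free() in the backtraces further down this thread.
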
>> It seems unlikely to me, since we are allocating the stack from the heap.
>>
>> Did you try a run with the valgrind instrumentation, without changing the stack size?
>
> OK.
> The following valgrind-without-stack-change.log is from a test with valgrind_stack_deregister added
> (and without io-threads).
>
> thanks,
> Kinglong Mee
>
>> None of this explains the crash though. We had seen a memory-overrun crash in the same code path on NetBSD earlier, but did not follow up then.
>> Will look further into it.
>>
>>
>>
>> On Thu, Jun 8, 2017 at 4:51 PM, Kinglong Mee <kinglongmee@gmail.com> wrote:
>>
>>     Maybe it's my fault; I found that valgrind can't follow a context switch (makecontext/swapcontext) by default.
>>     So, I tested with the following patch (it tells valgrind about the new stack via VALGRIND_STACK_REGISTER).
>>     With it, there are only some "Invalid read/write" reports by __gf_mem_invalidate. Is that right?
>>     So, there is only one problem: without io-threads, the stack size is too small for marker.
>>     Am I right?
>>
>>     PS:
>>     valgrind-before.log is the log without the following patch; valgrind-after.log is with the patch.
>>
>>     ==35656== Invalid write of size 8
>>     ==35656==    at 0x4E8FFD4: __gf_mem_invalidate (mem-pool.c:278)
>>     ==35656==    by 0x4E90313: __gf_free (mem-pool.c:334)
>>     ==35656==    by 0x4EA4E5B: synctask_destroy (syncop.c:394)
>>     ==35656==    by 0x4EA4EDF: synctask_done (syncop.c:412)
>>     ==35656==    by 0x4EA58B3: synctask_switchto (syncop.c:673)
>>     ==35656==    by 0x4EA596B: syncenv_processor (syncop.c:704)
>>     ==35656==    by 0x60B2DC4: start_thread (in /usr/lib64/libpthread-2.17.so)
>>     ==35656==    by 0x67A873C: clone (in /usr/lib64/libc-2.17.so)
>>     ==35656==  Address 0x1b104931 is 2,068,017 bytes inside a block of size 2,097,224 alloc'd
>>     ==35656==    at 0x4C29975: calloc (vg_replace_malloc.c:711)
>>     ==35656==    by 0x4E8FA5E: __gf_calloc (mem-pool.c:117)
>>     ==35656==    by 0x4EA52F5: synctask_create (syncop.c:500)
>>     ==35656==    by 0x4EA55AE: synctask_new1 (syncop.c:576)
>>     ==35656==    by 0x143AE0D7: mq_synctask1 (marker-quota.c:1078)
>>     ==35656==    by 0x143AE199: mq_synctask (marker-quota.c:1097)
>>     ==35656==    by 0x143AE6F6: _mq_create_xattrs_txn (marker-quota.c:1236)
>>     ==35656==    by 0x143AE82D: mq_create_xattrs_txn (marker-quota.c:1253)
>>     ==35656==    by 0x143B0DCB: mq_inspect_directory_xattr (marker-quota.c:2027)
>>     ==35656==    by 0x143B13A8: mq_xattr_state (marker-quota.c:2117)
>>     ==35656==    by 0x143A6E80: marker_lookup_cbk (marker.c:2961)
>>     ==35656==    by 0x141811E0: up_lookup_cbk (upcall.c:753)
>>
>>     ----------------------- valgrind ------------------------------------------
>>
>>     Don't forget to install valgrind-devel.
>>
>>     diff --git a/libglusterfs/src/syncop.c b/libglusterfs/src/syncop.c
>>     index 00a9b57..97b1de1 100644
>>     --- a/libglusterfs/src/syncop.c
>>     +++ b/libglusterfs/src/syncop.c
>>     @@ -10,6 +10,7 @@
>>
>>      #include "syncop.h"
>>      #include "libglusterfs-messages.h"
>>     +#include <valgrind/valgrind.h>
>>
>>      int
>>      syncopctx_setfsuid (void *uid)
>>     @@ -388,6 +389,8 @@ synctask_destroy (struct synctask *task)
>>              if (!task)
>>                      return;
>>
>>     +VALGRIND_STACK_DEREGISTER(task->valgrind_ret);
>>     +
>>              GF_FREE (task->stack);
>>
>>              if (task->opframe)
>>     @@ -509,6 +512,8 @@ synctask_create (struct syncenv *env, size_t stacksize, sync
>>
>>              newtask->ctx.uc_stack.ss_sp   = newtask->stack;
>>
>>     +       newtask->valgrind_ret = VALGRIND_STACK_REGISTER(newtask->stack, newtask-
>>     +
>>              makecontext (&newtask->ctx, (void (*)(void)) synctask_wrap, 2, newtask)
>>
>>              newtask->state = SYNCTASK_INIT;
>>     diff --git a/libglusterfs/src/syncop.h b/libglusterfs/src/syncop.h
>>     index c2387e6..247325b 100644
>>     --- a/libglusterfs/src/syncop.h
>>     +++ b/libglusterfs/src/syncop.h
>>     @@ -63,6 +63,7 @@ struct synctask {
>>              int                 woken;
>>              int                 slept;
>>              int                 ret;
>>     +       int                 valgrind_ret;
>>
>>              uid_t               uid;
>>              gid_t               gid;
>>     diff --git a/xlators/features/marker/src/marker-quota.c b/xlators/features/marker/src/marker-quota.c
>>     index 902b8e5..f3d2507 100644
>>     --- a/xlators/features/marker/src/marker-quota.c
>>     +++ b/xlators/features/marker/src/marker-quota.c
>>     @@ -1075,7 +1075,7 @@ mq_synctask1 (xlator_t *this, synctask_fn_t task, gf_boole
>>              }
>>
>>              if (spawn) {
>>     -                ret = synctask_new1 (this->ctx->env, 1024 * 16, task,
>>     +                ret = synctask_new1 (this->ctx->env, 0, task,
>>                                           mq_synctask_cleanup, NULL, args);
>>                      if (ret) {
>>                              gf_log (this->name, GF_LOG_ERROR, "Failed to spawn "
>>
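One practical note on the patch above: the VALGRIND_STACK_REGISTER / VALGRIND_STACK_DEREGISTER client requests are essentially no-ops when the process is not running under valgrind, so the main cost is the build-time dependency on valgrind-devel. A sketch of how the calls could be kept optional (HAVE_VALGRIND is a hypothetical configure-time flag here, not something the GlusterFS build necessarily defines):

    /* Sketch only: HAVE_VALGRIND stands in for an assumed configure check. */
    #if defined(HAVE_VALGRIND)
    #include <valgrind/valgrind.h>
    #else
    /* Keep the call sites unchanged when valgrind-devel is not installed. */
    #define VALGRIND_STACK_REGISTER(start, end)  0
    #define VALGRIND_STACK_DEREGISTER(id)        do { } while (0)
    #endif

With something like this, the instrumentation could stay in the tree without making valgrind-devel a hard build requirement, which is the caveat noted above ("Don't forget to install valgrind-devel").
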
>>
>>     On 6/8/2017 19:02, Sanoj Unnikrishnan wrote:
>>     > I would still be worried about the Invalid read/write. IMO, whether an illegal access causes a crash depends on whether the page is currently mapped.
>>     > So it could happen that there is a use-after-free / out-of-bounds access in the code, and it turns out that this location falls in a different (unmapped) page when io-threads is not loaded.
>>     >
>>     > Could you please share the valgrind logs as well?
>>     >
>>     > On Wed, Jun 7, 2017 at 8:22 PM, Kinglong Mee <kinglongmee@gmail.com> wrote:
>>     >
>>     >     After deleting io-threads from the vols, quota operations (list/set/modify) make glusterfsd crash.
>>     >     I am using CentOS 7 (CentOS Linux release 7.3.1611) with glusterfs 3.8.12.
>>     >     It seems the stack gets corrupted; when testing with the following diff, glusterfsd runs correctly.
>>     >
>>     >     There are two questions:
>>     >     1. When using valgrind, it shows there are many "Invalid read/write" errors with io-threads.
>>     >        Why does glusterfsd run correctly with io-threads, but crash without io-threads?
>>     >
>>     >     2. With the following diff, valgrind also shows many "Invalid read/write" errors without io-threads,
>>     >        but there is no crash.
>>     >
>>     >     Any comments are welcome.
>>     >
>>     >     Reverting http://review.gluster.org/11499 seems better than the diff.
>>     >
>>     >     diff --git a/xlators/features/marker/src/marker-quota.c b/xlators/features/marker/src/marker-quota.c
>>     >     index 902b8e5..f3d2507 100644
>>     >     --- a/xlators/features/marker/src/marker-quota.c
>>     >     +++ b/xlators/features/marker/src/marker-quota.c
>>     >     @@ -1075,7 +1075,7 @@ mq_synctask1 (xlator_t *this, synctask_fn_t task, gf_boole
>>     >              }
>>     >
>>     >              if (spawn) {
>>     >     -                ret = synctask_new1 (this->ctx->env, 1024 * 16, task,
>>     >     +                ret = synctask_new1 (this->ctx->env, 0, task,
>>     >                                           mq_synctask_cleanup, NULL, args);
>>     >                      if (ret) {
>>     >                              gf_log (this->name, GF_LOG_ERROR, "Failed to spawn "
>>     >
>>     >     ----------------------------------- test steps ----------------------------------
>>     >     1. gluster volume create gvtest node1:/test/ node2:/test/
>>     >     2. gluster volume start gvtest
>>     >     3. gluster volume quota enable gvtest
>>     >
>>     >     4. delete io-threads from all vols
>>     >     5. reboot node1 and node2.
>>     >     6. sh quota-set.sh
>>     >
>>     >     # cat quota-set.sh
>>     >     gluster volume quota gvtest list
>>     >     gluster volume quota gvtest limit-usage / 10GB
>>     >     gluster volume quota gvtest limit-usage /1234 1GB
>>     >     gluster volume quota gvtest limit-usage /hello 1GB
>>     >     gluster volume quota gvtest limit-usage /test 1GB
>>     >     gluster volume quota gvtest limit-usage /xyz 1GB
>>     >     gluster volume quota gvtest list
>>     >     gluster volume quota gvtest remove /hello
>>     >     gluster volume quota gvtest remove /test
>>     >     gluster volume quota gvtest list
>>     >     gluster volume quota gvtest limit-usage /test 1GB
>>     >     gluster volume quota gvtest remove /xyz
>>     >     gluster volume quota gvtest list
>>     >
>>     >     ----------------------- glusterfsd crash without the diff --------------------------------
>>     >
>>     >     /usr/local/lib/libglusterfs.so.0(_gf_msg_backtrace_nomem+0xf5)[0x7f6e1e950af1]
>>     >     /usr/local/lib/libglusterfs.so.0(gf_print_trace+0x21f)[0x7f6e1e956943]
>>     >     /usr/local/sbin/glusterfsd(glusterfsd_print_trace+0x1f)[0x409c83]
>>     >     /lib64/libc.so.6(+0x35250)[0x7f6e1d025250]
>>     >     /lib64/libc.so.6(gsignal+0x37)[0x7f6e1d0251d7]
>>     >     /lib64/libc.so.6(abort+0x148)[0x7f6e1d0268c8]
>>     >     /lib64/libc.so.6(+0x74f07)[0x7f6e1d064f07]
>>     >     /lib64/libc.so.6(+0x7baf5)[0x7f6e1d06baf5]
>>     >     /lib64/libc.so.6(+0x7c3e6)[0x7f6e1d06c3e6]
>>     >     /usr/local/lib/libglusterfs.so.0(__gf_free+0x311)[0x7f6e1e981327]
>>     >     /usr/local/lib/libglusterfs.so.0(synctask_destroy+0x82)[0x7f6e1e995c20]
>>     >     /usr/local/lib/libglusterfs.so.0(synctask_done+0x25)[0x7f6e1e995c47]
>>     >     /usr/local/lib/libglusterfs.so.0(synctask_switchto+0xcf)[0x7f6e1e996585]
>>     >     /usr/local/lib/libglusterfs.so.0(syncenv_processor+0x60)[0x7f6e1e99663d]
>>     >     /lib64/libpthread.so.0(+0x7dc5)[0x7f6e1d7a2dc5]
>>     >     /lib64/libc.so.6(clone+0x6d)[0x7f6e1d0e773d]
>>     >
>>     >     or
>>     >
>>     >     package-string: glusterfs 3.8.12
>>     >     /usr/local/lib/libglusterfs.so.0(_gf_msg_backtrace_nomem+0xf5)[0x7fa15e623af1]
>>     >     /usr/local/lib/libglusterfs.so.0(gf_print_trace+0x21f)[0x7fa15e629943]
>>     >     /usr/local/sbin/glusterfsd(glusterfsd_print_trace+0x1f)[0x409c83]
>>     >     /lib64/libc.so.6(+0x35250)[0x7fa15ccf8250]
>>     >     /lib64/libc.so.6(gsignal+0x37)[0x7fa15ccf81d7]
>>     >     /lib64/libc.so.6(abort+0x148)[0x7fa15ccf98c8]
>>     >     /lib64/libc.so.6(+0x74f07)[0x7fa15cd37f07]
>>     >     /lib64/libc.so.6(+0x7dd4d)[0x7fa15cd40d4d]
>>     >     /lib64/libc.so.6(__libc_calloc+0xb4)[0x7fa15cd43a14]
>>     >     /usr/local/lib/libglusterfs.so.0(__gf_calloc+0xa7)[0x7fa15e653a5f]
>>     >     /usr/local/lib/libglusterfs.so.0(iobref_new+0x2b)[0x7fa15e65875a]
>>     >     /usr/local/lib/glusterfs/3.8.12/rpc-transport/socket.so(+0xa98c)[0x7fa153a8398c]
>>     >     /usr/local/lib/glusterfs/3.8.12/rpc-transport/socket.so(+0xacbc)[0x7fa153a83cbc]
>>     >     /usr/local/lib/glusterfs/3.8.12/rpc-transport/socket.so(+0xad10)[0x7fa153a83d10]
>>     >     /usr/local/lib/glusterfs/3.8.12/rpc-transport/socket.so(+0xb2a7)[0x7fa153a842a7]
>>     >     /usr/local/lib/libglusterfs.so.0(+0x97ea9)[0x7fa15e68eea9]
>>     >     /usr/local/lib/libglusterfs.so.0(+0x982c6)[0x7fa15e68f2c6]
>>     >     /lib64/libpthread.so.0(+0x7dc5)[0x7fa15d475dc5]
>>     >     /lib64/libc.so.6(clone+0x6d)[0x7fa15cdba73d]
>>     >
>>     >     _______________________________________________
>>     >     Gluster-devel mailing list
>>     >     Gluster-devel@gluster.org
>>     >     http://lists.gluster.org/mailman/listinfo/gluster-devel
>>     >
>>     >
>>
>>
>