<div dir="ltr"><div>Hello Vijay,</div><div><br></div><div>How can I create such a core file? Or will it be created automatically if a gluster process crashes? <br></div><div>Maybe you can give me a hint, and I will try to get a backtrace.</div><div><br></div><div>Unfortunately, this bug is not easy to reproduce because it appears only occasionally.</div><div><br></div><div>Regards</div><div>David Spisla<br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Mon, May 6, 2019 at 7:48 PM Vijay Bellur &lt;<a href="mailto:vbellur@redhat.com">vbellur@redhat.com</a>&gt; wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">Thank you for the report, David. Do you have core files available on any of the servers? If yes, would it be possible for you to provide a backtrace?<div><br></div><div>Regards,</div><div>Vijay</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Mon, May 6, 2019 at 3:09 AM David Spisla &lt;<a href="mailto:spisla80@gmail.com" target="_blank">spisla80@gmail.com</a>&gt; wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr"><div>Hello folks,</div><div><br></div><div>We have a client application (running on Win10) that performs some FOPs on a Gluster volume accessed via SMB. <br></div><div><br></div><div><b>Scenario 1</b> is a<span lang="en"> READ operation that reads all files successively and checks whether each file&#39;s data was copied correctly. While doing this, all brick processes crash, and one can find this crash report in every brick log:</span></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div><span lang="en">
</span><pre>CTX_ID:a0359502-2c76-4fee-8cb9-365679dc690e-GRAPH_ID:0-PID:32934-HOST:XX-XXXXX-XX-XX-PC_NAME:shortterm-client-2-RECON_NO:-0, gfid: 00000000-0000-0000-0000-000000000001, req(uid:2000,gid:2000,perm:1,ngrps:1), ctx(uid:0,gid:0,in-groups:0,perm:700,updated-fop:LOOKUP, acl:-) [Permission denied]
pending frames:
frame : type(0) op(27)
frame : type(0) op(40)
patchset: git://<a href="http://git.gluster.org/glusterfs.git" target="_blank">git.gluster.org/glusterfs.git</a>
signal received: 11
time of crash: 
2019-04-16 08:32:21
configuration details:
argp 1
backtrace 1
dlfcn 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 5.5
/usr/lib64/libglusterfs.so.0(+0x2764c)[0x7f9a5bd4d64c]
/usr/lib64/libglusterfs.so.0(gf_print_trace+0x306)[0x7f9a5bd57d26]
/lib64/libc.so.6(+0x361a0)[0x7f9a5af141a0]
/usr/lib64/glusterfs/5.5/xlator/features/worm.so(+0xb910)[0x7f9a4ef0e910]
/usr/lib64/glusterfs/5.5/xlator/features/worm.so(+0x8118)[0x7f9a4ef0b118]
/usr/lib64/glusterfs/5.5/xlator/features/locks.so(+0x128d6)[0x7f9a4f1278d6]
/usr/lib64/glusterfs/5.5/xlator/features/access-control.so(+0x575b)[0x7f9a4f35975b]
/usr/lib64/glusterfs/5.5/xlator/features/locks.so(+0xb3b3)[0x7f9a4f1203b3]
/usr/lib64/glusterfs/5.5/xlator/features/worm.so(+0x85b2)[0x7f9a4ef0b5b2]
/usr/lib64/libglusterfs.so.0(default_lookup+0xbc)[0x7f9a5bdd7b6c]
/usr/lib64/libglusterfs.so.0(default_lookup+0xbc)[0x7f9a5bdd7b6c]
/usr/lib64/glusterfs/5.5/xlator/features/upcall.so(+0xf548)[0x7f9a4e8cf548]
/usr/lib64/libglusterfs.so.0(default_lookup_resume+0x1e2)[0x7f9a5bdefc22]
/usr/lib64/libglusterfs.so.0(call_resume+0x75)[0x7f9a5bd733a5]
/usr/lib64/glusterfs/5.5/xlator/performance/io-threads.so(+0x6088)[0x7f9a4e6b7088]
/lib64/libpthread.so.0(+0x7569)[0x7f9a5b29f569]
/lib64/libc.so.6(clone+0x3f)[0x7f9a5afd69af]</pre>

</div></blockquote><div><b>Scenario 2</b> The application just sets Read-Only on each file successively. After the 70th file was set, all bricks crashed and, again, one can find this crash report in every brick log:</div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div><pre>[2019-05-02 07:43:39.953591] I [MSGID: 139001] [posix-acl.c:263:posix_acl_log_permit_denied] 0-longterm-access-control: client: CTX_ID:21aa9c75-3a5f-41f9-925b-48e4c80bd24a-GRAPH_ID:0-PID:16325-HOST:XXX-X-X-XXX-PC_NAME:longterm-client-0-RECON_NO:-0, gfid: 00000000-0000-0000-0000-000000000001, req(uid:2000,gid:2000,perm:1,ngrps:1), ctx(uid:0,gid:0,in-groups:0,perm:700,updated-fop:LOOKUP, acl:-) [Permission denied]
pending frames:
frame : type(0) op(27)
patchset: git://<a href="http://git.gluster.org/glusterfs.git" target="_blank">git.gluster.org/glusterfs.git</a>
signal received: 11
time of crash: 
2019-05-02 07:43:39
configuration details:
argp 1
backtrace 1
dlfcn 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 5.5
/usr/lib64/libglusterfs.so.0(+0x2764c)[0x7fbb3f0b364c]
/usr/lib64/libglusterfs.so.0(gf_print_trace+0x306)[0x7fbb3f0bdd26]
/lib64/libc.so.6(+0x361e0)[0x7fbb3e27a1e0]
/usr/lib64/glusterfs/5.5/xlator/features/worm.so(+0xb910)[0x7fbb32257910]
/usr/lib64/glusterfs/5.5/xlator/features/worm.so(+0x8118)[0x7fbb32254118]
/usr/lib64/glusterfs/5.5/xlator/features/locks.so(+0x128d6)[0x7fbb324708d6]
/usr/lib64/glusterfs/5.5/xlator/features/access-control.so(+0x575b)[0x7fbb326a275b]
/usr/lib64/glusterfs/5.5/xlator/features/locks.so(+0xb3b3)[0x7fbb324693b3]
/usr/lib64/glusterfs/5.5/xlator/features/worm.so(+0x85b2)[0x7fbb322545b2]
/usr/lib64/libglusterfs.so.0(default_lookup+0xbc)[0x7fbb3f13db6c]
/usr/lib64/libglusterfs.so.0(default_lookup+0xbc)[0x7fbb3f13db6c]
/usr/lib64/glusterfs/5.5/xlator/features/upcall.so(+0xf548)[0x7fbb31c18548]
/usr/lib64/libglusterfs.so.0(default_lookup_resume+0x1e2)[0x7fbb3f155c22]
/usr/lib64/libglusterfs.so.0(call_resume+0x75)[0x7fbb3f0d93a5]
/usr/lib64/glusterfs/5.5/xlator/performance/io-threads.so(+0x6088)[0x7fbb31a00088]
/lib64/libpthread.so.0(+0x7569)[0x7fbb3e605569]
/lib64/libc.so.6(clone+0x3f)[0x7fbb3e33c9ef]</pre>
</div></blockquote><div><span lang="en"><br></span></div><div><span lang="en">This happens on a 3-node Gluster v5.5 cluster on two different volumes, but both volumes have the same settings:</span></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div><span lang="en">Volume Name: shortterm<br>Type: Replicate<br>Volume ID: 5307e5c5-e8a1-493a-a846-342fb0195dee<br>Status: Started<br>Snapshot Count: 0<br>Number of Bricks: 1 x 3 = 3<br>Transport-type: tcp<br>Bricks:<br>Brick1: fs-xxxxx-c1-n1:/gluster/brick4/glusterbrick<br>Brick2: fs-xxxxx-c1-n2:/gluster/brick4/glusterbrick<br>Brick3: fs-xxxxx-c1-n3:/gluster/brick4/glusterbrick<br>Options Reconfigured:<br>storage.reserve: 1<br>performance.client-io-threads: off<br>nfs.disable: on<br>transport.address-family: inet<br>user.smb: disable<br>features.read-only: off<br>features.worm: off<br>features.worm-file-level: on<br>features.retention-mode: enterprise<br>features.default-retention-period: 120<br>network.ping-timeout: 10<br>features.cache-invalidation: on<br>features.cache-invalidation-timeout: 600<br>performance.nl-cache: on<br>performance.nl-cache-timeout: 600<br>client.event-threads: 32<br>server.event-threads: 32<br>cluster.lookup-optimize: on<br>performance.stat-prefetch: on<br>performance.cache-invalidation: on<br>performance.md-cache-timeout: 600<br>performance.cache-samba-metadata: on<br>performance.cache-ima-xattrs: on<br>performance.io-thread-count: 64<br>cluster.use-compound-fops: on<br>performance.cache-size: 512MB<br>performance.cache-refresh-timeout: 10<br>performance.read-ahead: off<br>performance.write-behind-window-size: 4MB<br>performance.write-behind: on<br>storage.build-pgfid: on<br>features.utime: on<br>storage.ctime: on<br>cluster.quorum-type: fixed<br>cluster.quorum-count: 2<br>features.bitrot: on<br>features.scrub: Active<br>features.scrub-freq: daily<br>cluster.enable-shared-storage: enable<br><br></span></div></blockquote><div><br></div><div>Why can this happen to all brick processes? I don&#39;t understand the crash report. The FOPs are nothing special, and after restarting the brick processes everything works fine and our application succeeds.</div><div><br></div><div>Regards</div><div>David Spisla<br></div><div><span lang="en"><br></span>

</div></div></div>
_______________________________________________<br>
Gluster-users mailing list<br>
<a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a><br>
<a href="https://lists.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">https://lists.gluster.org/mailman/listinfo/gluster-users</a></blockquote></div>
</blockquote></div>
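On the core-file question raised above: a brick process that receives SIGSEGV only leaves a core file if core dumps are enabled on the server. The following is a minimal sketch for a typical Linux server, not a definitive recipe; the dump directory, glusterfsd binary path, and the example core filename are assumptions to adapt to your distribution.

```shell
# Allow core files of unlimited size in this shell; for brick daemons
# started by systemd, set LimitCORE=infinity in the unit file instead.
ulimit -c unlimited

# Have the kernel write cores to a fixed directory with a recognizable
# name (%e = executable name, %p = PID). Requires root.
mkdir -p /var/crash
sysctl -w kernel.core_pattern='/var/crash/core.%e.%p'

# After the next crash, extract a full backtrace from the core with gdb.
# The core filename below is an example; installing the glusterfs
# debuginfo package first lets the worm.so/locks.so frames resolve to
# source lines instead of raw offsets.
gdb -batch -ex 'thread apply all bt full' \
    /usr/sbin/glusterfsd /var/crash/core.glusterfsd.16325 > backtrace.txt
```

On distributions that run systemd-coredump, `coredumpctl list` and `coredumpctl gdb` give the same result without changing `kernel.core_pattern`.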