<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Aug 2, 2018 at 5:48 PM, Kotresh Hiremath Ravishankar <span dir="ltr"><<a href="mailto:khiremat@redhat.com" target="_blank">khiremat@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><div><div>I am facing different issue in softserve machines. The fuse mount itself is failing.<br></div>I tried day before yesterday to debug geo-rep failures. I discussed with Raghu,<br></div>but could not root cause it. </div></div></blockquote><div><br></div><div>Where can I find the complete client logs for this?</div><div> <br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div>So none of the tests were passing. It happened on</div><div>both machine instances I tried.</div><div><br></div><div>------------------------</div><div>[2018-07-31 10:41:49.288117] D [fuse-bridge.c:5407:notify] 0-fuse: got event 6 on graph 0<br>[2018-07-31 10:41:49.289427] D [fuse-bridge.c:4990:fuse_get_<wbr>mount_status] 0-fuse: mount status is 0<br>[2018-07-31 10:41:49.289555] D [fuse-bridge.c:4256:fuse_init] 0-glusterfs-fuse: Detected support for FUSE_AUTO_INVAL_DATA. Enabling fopen_keep_cache automatically.<br>[2018-07-31 10:41:49.289591] T [fuse-bridge.c:278:send_fuse_<wbr>iov] 0-glusterfs-fuse: writev() result 40/40 <br>[2018-07-31 10:41:49.289610] I [fuse-bridge.c:4314:fuse_init] 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.24 kernel 7.22<br>[2018-07-31 10:41:49.289627] I [fuse-bridge.c:4948:fuse_<wbr>graph_sync] 0-fuse: switched to graph 0<br>[2018-07-31 10:41:49.289696] T [MSGID: 0] [syncop.c:1261:syncop_lookup] 0-stack-trace: stack-address: 0x7f36e4001058, winding from fuse to meta-autoload<br>[2018-07-31 10:41:49.289743] T [MSGID: 0] [defaults.c:2716:default_<wbr>lookup] 0-stack-trace: stack-address: 0x7f36e4001058, winding from meta-autoload to master<br>[2018-07-31 10:41:49.289787] T [MSGID: 0] [io-stats.c:2788:io_stats_<wbr>lookup] 0-stack-trace: stack-address: 0x7f36e4001058, winding from master to master-md-cache<br>[2018-07-31 10:41:49.289833] T [MSGID: 0] [md-cache.c:513:mdc_inode_<wbr>iatt_get] 0-md-cache: mdc_inode_ctx_get failed (00000000-0000-0000-0000-<wbr>000000000001)<br>[2018-07-31 10:41:49.289923] T [MSGID: 0] [md-cache.c:1200:mdc_lookup] 0-stack-trace: stack-address: 0x7f36e4001058, winding from master-md-cache to master-open-behind<br>[2018-07-31 10:41:49.289946] T [MSGID: 0] [defaults.c:2716:default_<wbr>lookup] 0-stack-trace: stack-address: 0x7f36e4001058, winding from master-open-behind to master-quick-read<br>[2018-07-31 10:41:49.289973] T [MSGID: 0] [quick-read.c:556:qr_lookup] 0-stack-trace: stack-address: 0x7f36e4001058, winding from master-quick-read to master-io-cache<br>[2018-07-31 10:41:49.290002] T [MSGID: 0] [io-cache.c:298:ioc_lookup] 0-stack-trace: stack-address: 0x7f36e4001058, winding from master-io-cache to master-readdir-ahead<br>[2018-07-31 10:41:49.290034] T [MSGID: 0] [defaults.c:2716:default_<wbr>lookup] 0-stack-trace: stack-address: 0x7f36e4001058, winding from master-readdir-ahead to master-read-ahead<br>[2018-07-31 10:41:49.290052] T [MSGID: 0] [defaults.c:2716:default_<wbr>lookup] 0-stack-trace: stack-address: 0x7f36e4001058, winding from master-read-ahead to master-write-behind<br>[2018-07-31 10:41:49.290077] T [MSGID: 0] [write-behind.c:2439:wb_<wbr>lookup] 0-stack-trace: stack-address: 
0x7f36e4001058, winding from master-write-behind to master-dht<br>[2018-07-31 10:41:49.290156] D [MSGID: 0] [dht-common.c:3674:dht_do_<wbr>fresh_lookup] 0-master-dht: /: no subvolume in layout for path, checking on all the subvols to see if it is a directory<br>[2018-07-31 10:41:49.290180] D [MSGID: 0] [dht-common.c:3688:dht_do_<wbr>fresh_lookup] 0-master-dht: /: Found null hashed subvol. Calling lookup on all nodes.<br>[2018-07-31 10:41:49.290199] T [MSGID: 0] [dht-common.c:3695:dht_do_<wbr>fresh_lookup] 0-stack-trace: stack-address: 0x7f36e4001058, winding from master-dht to master-replicate-0<br>[2018-07-31 10:41:49.290245] I [MSGID: 108006] [afr-common.c:5582:afr_local_<wbr>init] 0-master-replicate-0: no subvolumes up<br>[2018-07-31 10:41:49.290291] D [MSGID: 0] [afr-common.c:3212:afr_<wbr>discover] 0-stack-trace: stack-address: 0x7f36e4001058, master-replicate-0 returned -1 error: Transport endpoint is not conne<br>cted [Transport endpoint is not connected]<br>[2018-07-31 10:41:49.290323] D [MSGID: 0] [dht-common.c:1391:dht_lookup_<wbr>dir_cbk] 0-master-dht: lookup of / on master-replicate-0 returned error [Transport endpoint is not connected]<br>[2018-07-31 10:41:49.290350] T [MSGID: 0] [dht-common.c:3695:dht_do_<wbr>fresh_lookup] 0-stack-trace: stack-address: 0x7f36e4001058, winding from master-dht to master-replicate-1<br>[2018-07-31 10:41:49.290381] I [MSGID: 108006] [afr-common.c:5582:afr_local_<wbr>init] 0-master-replicate-1: no subvolumes up<br>[2018-07-31 10:41:49.290403] D [MSGID: 0] [afr-common.c:3212:afr_<wbr>discover] 0-stack-trace: stack-address: 0x7f36e4001058, master-replicate-1 returned -1 error: Transport endpoint is not connected [Transport endpoint is not connected]<br>[2018-07-31 10:41:49.290427] D [MSGID: 0] [dht-common.c:1391:dht_lookup_<wbr>dir_cbk] 0-master-dht: lookup of / on master-replicate-1 returned error [Transport endpoint is not connected]<br>[2018-07-31 10:41:49.290452] D [MSGID: 0] [dht-common.c:1574:dht_lookup_<wbr>dir_cbk] 0-stack-trace: stack-address: 0x7f36e4001058, master-dht returned -1 error: Transport endpoint is not connected [Transport endpoint is not connected]<br>[2018-07-31 10:41:49.290477] D [MSGID: 0] [write-behind.c:2393:wb_<wbr>lookup_cbk] 0-stack-trace: stack-address: 0x7f36e4001058, master-write-behind returned -1 error: Transport endpoint is not connected [Transport endpoint is not connected]<br>[2018-07-31 10:41:49.290504] D [MSGID: 0] [io-cache.c:268:ioc_lookup_<wbr>cbk] 0-stack-trace: stack-address: 0x7f36e4001058, master-io-cache returned -1 error: Transport endpoint is not connected [Transport endpoint is not connected]<br>[2018-07-31 10:41:49.290530] D [MSGID: 0] [quick-read.c:515:qr_lookup_<wbr>cbk] 0-stack-trace: stack-address: 0x7f36e4001058, master-quick-read returned -1 error: Transport endpoint is not connected [Transport endpoint is not connected]<br>[2018-07-31 10:41:49.290554] D [MSGID: 0] [md-cache.c:1130:mdc_lookup_<wbr>cbk] 0-stack-trace: stack-address: 0x7f36e4001058, master-md-cache returned -1 error: Transport endpoint is not connected [Transport endpoint is not connected]<br>[2018-07-31 10:41:49.290581] D [MSGID: 0] [io-stats.c:2276:io_stats_<wbr>lookup_cbk] 0-stack-trace: stack-address: 0x7f36e4001058, master returned -1 error: Transport endpoint is not connected [Transport endpoint is not connected]<br>[2018-07-31 10:41:49.290626] E [fuse-bridge.c:4382:fuse_<wbr>first_lookup] 0-fuse: first lookup on root failed (Transport endpoint is not 
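From the snippet above it looks like the client never established a connection to any brick before the first lookup on the root ("no subvolumes up" from both replicate subvolumes), so the interesting part is probably earlier in the mount log, around the connection attempts. A couple of quick checks worth running on the softserve machine itself (just a sketch; the volume name "master" is taken from the log, while the server and mount point below are placeholders):

    # are the bricks of the "master" volume actually online and listening?
    gluster volume status master

    # the fuse client log is normally named after the mount point, with '/'
    # replaced by '-'; e.g. a mount on /mnt/master usually logs to:
    less /var/log/glusterfs/mnt-master.log

    # re-mount with a higher client log level to capture the brick handshakes
    mount -t glusterfs -o log-level=DEBUG <server>:/master /mnt/master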
>
> On Thu, Aug 2, 2018 at 5:35 PM, Nigel Babu <nigelb@redhat.com> wrote:
>> On Thu, Aug 2, 2018 at 5:12 PM Kotresh Hiremath Ravishankar <khiremat@redhat.com> wrote:
>>> I don't know; something to do with the perf xlators, I suppose. It is not reproduced on my
>>> local system even with brick-mux enabled, but it is happening on Xavi's system.
>>>
>>> Xavi, could you try with the patch [1] and let me know whether it fixes the issue?
>>>
>>> [1] https://review.gluster.org/#/c/20619/1
>>
>> If you cannot reproduce it on your laptop, why don't you request a machine from
>> softserve [1] and try it out?
>>
>> [1]: https://github.com/gluster/softserve/wiki/Running-Regressions-on-clean-Centos-7-machine
>>
>> --
>> nigelb
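For reference, this is roughly how that patchset can be pulled onto a test machine (a sketch; it assumes the usual Gerrit refs/changes/<last-two-digits>/<change>/<patchset> layout, that review.gluster.org serves the glusterfs repo for anonymous fetch at that path, and a standard source build from a glusterfs checkout):

    # fetch and check out patchset 1 of change 20619 from Gerrit
    git fetch https://review.gluster.org/glusterfs refs/changes/19/20619/1
    git checkout FETCH_HEAD

    # rebuild and reinstall before re-running the failing test
    ./autogen.sh && ./configure --enable-debug
    make -j"$(nproc)" && sudo make install

And since the suspicion is on the perf xlators, they can also be switched off one by one on the test volume to narrow it down (again assuming the volume is called "master"):

    gluster volume set master performance.quick-read off
    gluster volume set master performance.write-behind off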
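Also, for whoever grabs the softserve machine: once it is up, re-running just the failing tests is usually faster than a full regression run. A sketch (the geo-rep test path below is only illustrative; substitute whichever .t file is actually failing):

    # full regression run from a glusterfs checkout
    ./run-tests.sh

    # or a single test, either through the wrapper or directly through prove
    ./run-tests.sh tests/00-geo-rep/georep-basic-dr-rsync.t
    prove -vf tests/00-geo-rep/georep-basic-dr-rsync.t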
>
> --
> Thanks and Regards,
> Kotresh H R
_______________________________________________
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers