[Gluster-devel] [Gluster-Maintainers] Release 5: Master branch health report (Week of 30th July)

Raghavendra Gowdappa rgowdapp at redhat.com
Thu Aug 2 16:51:35 UTC 2018


On Thu, Aug 2, 2018 at 5:48 PM, Kotresh Hiremath Ravishankar <
khiremat at redhat.com> wrote:

> I am facing a different issue on the softserve machines: the fuse mount
> itself is failing.
> I tried the day before yesterday to debug the geo-rep failures and
> discussed it with Raghu, but we could not root-cause it.
>
>

Where can I find the complete client logs for this?
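
On a default setup the client log should be under /var/log/glusterfs/, with
the filename derived from the mount point (e.g. mnt-glusterfs.log for a mount
at /mnt/glusterfs); whether softserve redirects it elsewhere is an assumption
on my part. Something like this should surface the newest one:

    # default client log location; the filename mirrors the mount path
    ls -lt /var/log/glusterfs/*.log | head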

> So none of the tests were passing. It happened on
> both machine instances I tried.
>
> ------------------------
> [2018-07-31 10:41:49.288117] D [fuse-bridge.c:5407:notify] 0-fuse: got
> event 6 on graph 0
> [2018-07-31 10:41:49.289427] D [fuse-bridge.c:4990:fuse_get_mount_status]
> 0-fuse: mount status is 0
> [2018-07-31 10:41:49.289555] D [fuse-bridge.c:4256:fuse_init]
> 0-glusterfs-fuse: Detected support for FUSE_AUTO_INVAL_DATA. Enabling
> fopen_keep_cache automatically.
> [2018-07-31 10:41:49.289591] T [fuse-bridge.c:278:send_fuse_iov]
> 0-glusterfs-fuse: writev() result 40/40
> [2018-07-31 10:41:49.289610] I [fuse-bridge.c:4314:fuse_init]
> 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.24 kernel
> 7.22
> [2018-07-31 10:41:49.289627] I [fuse-bridge.c:4948:fuse_graph_sync]
> 0-fuse: switched to graph 0
> [2018-07-31 10:41:49.289696] T [MSGID: 0] [syncop.c:1261:syncop_lookup]
> 0-stack-trace: stack-address: 0x7f36e4001058, winding from fuse to
> meta-autoload
> [2018-07-31 10:41:49.289743] T [MSGID: 0] [defaults.c:2716:default_lookup]
> 0-stack-trace: stack-address: 0x7f36e4001058, winding from meta-autoload to
> master
> [2018-07-31 10:41:49.289787] T [MSGID: 0] [io-stats.c:2788:io_stats_lookup]
> 0-stack-trace: stack-address: 0x7f36e4001058, winding from master to
> master-md-cache
> [2018-07-31 10:41:49.289833] T [MSGID: 0] [md-cache.c:513:mdc_inode_iatt_get]
> 0-md-cache: mdc_inode_ctx_get failed (00000000-0000-0000-0000-000000000001)
> [2018-07-31 10:41:49.289923] T [MSGID: 0] [md-cache.c:1200:mdc_lookup]
> 0-stack-trace: stack-address: 0x7f36e4001058, winding from master-md-cache
> to master-open-behind
> [2018-07-31 10:41:49.289946] T [MSGID: 0] [defaults.c:2716:default_lookup]
> 0-stack-trace: stack-address: 0x7f36e4001058, winding from
> master-open-behind to master-quick-read
> [2018-07-31 10:41:49.289973] T [MSGID: 0] [quick-read.c:556:qr_lookup]
> 0-stack-trace: stack-address: 0x7f36e4001058, winding from
> master-quick-read to master-io-cache
> [2018-07-31 10:41:49.290002] T [MSGID: 0] [io-cache.c:298:ioc_lookup]
> 0-stack-trace: stack-address: 0x7f36e4001058, winding from master-io-cache
> to master-readdir-ahead
> [2018-07-31 10:41:49.290034] T [MSGID: 0] [defaults.c:2716:default_lookup]
> 0-stack-trace: stack-address: 0x7f36e4001058, winding from
> master-readdir-ahead to master-read-ahead
> [2018-07-31 10:41:49.290052] T [MSGID: 0] [defaults.c:2716:default_lookup]
> 0-stack-trace: stack-address: 0x7f36e4001058, winding from
> master-read-ahead to master-write-behind
> [2018-07-31 10:41:49.290077] T [MSGID: 0] [write-behind.c:2439:wb_lookup]
> 0-stack-trace: stack-address: 0x7f36e4001058, winding from
> master-write-behind to master-dht
> [2018-07-31 10:41:49.290156] D [MSGID: 0] [dht-common.c:3674:dht_do_fresh_lookup]
> 0-master-dht: /: no subvolume in layout for path, checking on all the
> subvols to see if it is a directory
> [2018-07-31 10:41:49.290180] D [MSGID: 0] [dht-common.c:3688:dht_do_fresh_lookup]
> 0-master-dht: /: Found null hashed subvol. Calling lookup on all nodes.
> [2018-07-31 10:41:49.290199] T [MSGID: 0] [dht-common.c:3695:dht_do_fresh_lookup]
> 0-stack-trace: stack-address: 0x7f36e4001058, winding from master-dht to
> master-replicate-0
> [2018-07-31 10:41:49.290245] I [MSGID: 108006]
> [afr-common.c:5582:afr_local_init] 0-master-replicate-0: no subvolumes up
> [2018-07-31 10:41:49.290291] D [MSGID: 0] [afr-common.c:3212:afr_discover]
> 0-stack-trace: stack-address: 0x7f36e4001058, master-replicate-0 returned
> -1 error: Transport endpoint is not connected [Transport endpoint is not
> connected]
> [2018-07-31 10:41:49.290323] D [MSGID: 0] [dht-common.c:1391:dht_lookup_dir_cbk]
> 0-master-dht: lookup of / on master-replicate-0 returned error [Transport
> endpoint is not connected]
> [2018-07-31 10:41:49.290350] T [MSGID: 0] [dht-common.c:3695:dht_do_fresh_lookup]
> 0-stack-trace: stack-address: 0x7f36e4001058, winding from master-dht to
> master-replicate-1
> [2018-07-31 10:41:49.290381] I [MSGID: 108006]
> [afr-common.c:5582:afr_local_init] 0-master-replicate-1: no subvolumes up
> [2018-07-31 10:41:49.290403] D [MSGID: 0] [afr-common.c:3212:afr_discover]
> 0-stack-trace: stack-address: 0x7f36e4001058, master-replicate-1 returned
> -1 error: Transport endpoint is not connected [Transport endpoint is not
> connected]
> [2018-07-31 10:41:49.290427] D [MSGID: 0] [dht-common.c:1391:dht_lookup_dir_cbk]
> 0-master-dht: lookup of / on master-replicate-1 returned error [Transport
> endpoint is not connected]
> [2018-07-31 10:41:49.290452] D [MSGID: 0] [dht-common.c:1574:dht_lookup_dir_cbk]
> 0-stack-trace: stack-address: 0x7f36e4001058, master-dht returned -1 error:
> Transport endpoint is not connected [Transport endpoint is not connected]
> [2018-07-31 10:41:49.290477] D [MSGID: 0] [write-behind.c:2393:wb_lookup_cbk]
> 0-stack-trace: stack-address: 0x7f36e4001058, master-write-behind returned
> -1 error: Transport endpoint is not connected [Transport endpoint is not
> connected]
> [2018-07-31 10:41:49.290504] D [MSGID: 0] [io-cache.c:268:ioc_lookup_cbk]
> 0-stack-trace: stack-address: 0x7f36e4001058, master-io-cache returned -1
> error: Transport endpoint is not connected [Transport endpoint is not
> connected]
> [2018-07-31 10:41:49.290530] D [MSGID: 0] [quick-read.c:515:qr_lookup_cbk]
> 0-stack-trace: stack-address: 0x7f36e4001058, master-quick-read returned -1
> error: Transport endpoint is not connected [Transport endpoint is not
> connected]
> [2018-07-31 10:41:49.290554] D [MSGID: 0] [md-cache.c:1130:mdc_lookup_cbk]
> 0-stack-trace: stack-address: 0x7f36e4001058, master-md-cache returned -1
> error: Transport endpoint is not connected [Transport endpoint is not
> connected]
> [2018-07-31 10:41:49.290581] D [MSGID: 0] [io-stats.c:2276:io_stats_lookup_cbk]
> 0-stack-trace: stack-address: 0x7f36e4001058, master returned -1 error:
> Transport endpoint is not connected [Transport endpoint is not connected]
> [2018-07-31 10:41:49.290626] E [fuse-bridge.c:4382:fuse_first_lookup]
> 0-fuse: first lookup on root failed (Transport endpoint is not connected)
> ---------------------------------------------
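
Going by the snippet above, both replicate children report "no subvolumes up"
(MSGID 108006), so the very first lookup on / winds down the whole client
stack and fails with ENOTCONN before any fop reaches a brick. That points at
the protocol/client translators never connecting to the bricks at mount time,
not at the perf xlators. As a first check on the softserve instance, something
along these lines might help (a sketch; the volume name "master" is taken from
the xlator names in the log, everything else is an assumption):

------------------------
# Are the bricks up and listening on their advertised ports?
gluster volume status master

# Did the client ever connect to (or get disconnected from) the bricks?
# The exact mount-log filename depends on the mount point.
grep -E "Connected to|disconnected from" /var/log/glusterfs/*.log

# Are the brick processes actually listening?
ss -ltnp | grep gluster
------------------------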
>
> On Thu, Aug 2, 2018 at 5:35 PM, Nigel Babu <nigelb at redhat.com> wrote:
>
>> On Thu, Aug 2, 2018 at 5:12 PM Kotresh Hiremath Ravishankar <
>> khiremat at redhat.com> wrote:
>>
>>> I don't know; something to do with the perf xlators, I suppose. It is
>>> not reproduced on my local system even with brick-mux enabled, but it is
>>> happening on Xavi's system.
>>>
>>> Xavi,
>>> Could you try the patch [1] and let me know whether it fixes the
>>> issue?
>>>
>>> [1] https://review.gluster.org/#/c/20619/1
>>>
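
Independent of the patch, one way to narrow this down might be to switch the
client-side perf xlators off one at a time on the test volume and rerun after
each change (a sketch; the volume name is an assumption):

------------------------
gluster volume set master performance.quick-read off
gluster volume set master performance.io-cache off
gluster volume set master performance.read-ahead off
gluster volume set master performance.readdir-ahead off
gluster volume set master performance.write-behind off
gluster volume set master performance.open-behind off
gluster volume set master performance.stat-prefetch off    # md-cache
------------------------

Whichever option makes the failure disappear should point at the culprit
xlator.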
>>
>> If you cannot reproduce it on your laptop, why don't you request a
>> machine from softserve[1] and try it out?
>>
>> [1]: https://github.com/gluster/softserve/wiki/Running-Regressions-on-clean-Centos-7-machine
>>
>> --
>> nigelb
>>
>
>
>
> --
> Thanks and Regards,
> Kotresh H R
>
> _______________________________________________
> maintainers mailing list
> maintainers at gluster.org
> https://lists.gluster.org/mailman/listinfo/maintainers
>
>