<div dir="ltr"><div>Dear Vlad,</div><div><br></div><div>I'm sorry, I don't want to test this again on my system just yet! It caused too much instability for my users and I don't have enough resources for a development environment. The only other variable that changed before the crashes was the group metadata-cache[0], which I enabled the same day as the parallel-readdir and readdir-ahead options:</div><div><br></div><div>$ gluster volume set homes group metadata-cache</div><div><br></div><div>I'm hoping Atin or Poornima can shed some light and squash this bug.<br></div><div><br></div><div>[0] <a href="https://github.com/gluster/glusterfs/blob/release-3.11/doc/release-notes/3.11.0.md">https://github.com/gluster/glusterfs/blob/release-3.11/doc/release-notes/3.11.0.md</a></div><div><br></div><div>Regards,<br></div></div><br><div class="gmail_quote"><div dir="ltr">On Fri, Jan 26, 2018 at 6:10 AM Vlad Kopylov <<a href="mailto:vladkopy@gmail.com">vladkopy@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">can you please test whether parallel-readdir or readdir-ahead gives<br>
disconnects? so we know which to disable<br>
<br>
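One way to isolate the culprit, as a minimal sketch (assuming the volume name 'homes' from this thread and the standard GlusterFS option keys; not tested against this cluster):

```shell
# Disable both suspect options first (hypothetical starting point)
gluster volume set homes performance.parallel-readdir off
gluster volume set homes performance.readdir-ahead off

# Re-enable only readdir-ahead and watch the clients for
# "Transport endpoint is not connected" disconnects
gluster volume set homes performance.readdir-ahead on

# ...after an observation window, swap: turn readdir-ahead off
# and try parallel-readdir alone
gluster volume set homes performance.readdir-ahead off
gluster volume set homes performance.parallel-readdir on
```

Whichever single option reproduces the disconnects is the one to leave disabled while the bug is investigated.<br>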
parallel-readdir's magic is described in a PDF from last year:<br>
<a href="https://events.static.linuxfound.org/sites/events/files/slides/Gluster_DirPerf_Vault2017_0.pdf" rel="noreferrer" target="_blank">https://events.static.linuxfound.org/sites/events/files/slides/Gluster_DirPerf_Vault2017_0.pdf</a><br>
<br>
-v<br>
<br>
On Thu, Jan 25, 2018 at 8:20 AM, Alan Orth <<a href="mailto:alan.orth@gmail.com" target="_blank">alan.orth@gmail.com</a>> wrote:<br>
> By the way, on a slightly related note, I'm pretty sure either<br>
> parallel-readdir or readdir-ahead has a regression in GlusterFS 3.12.x. We<br>
> are running CentOS 7 with kernel-3.10.0-693.11.6.el7.x86_64.<br>
><br>
> I updated my servers and clients to 3.12.4 and enabled these two options<br>
> after reading about them in the 3.10.0 and 3.11.0 release notes. In the days<br>
> after enabling these two options all of my clients kept getting disconnected<br>
> from the volume. The error upon attempting to list a directory or read a<br>
> file was "Transport endpoint is not connected", after which I would force<br>
> unmount the volume with `umount -fl /home` and remount it, only to have it<br>
> get disconnected again a few hours later.<br>
><br>
> Every time the volume disconnected I looked in the client mount log and only<br>
> found information such as:<br>
><br>
> [2018-01-24 05:52:27.695225] I [MSGID: 108026]<br>
> [afr-self-heal-common.c:1656:afr_log_selfheal] 2-homes-replicate-1:<br>
> Completed metadata selfheal on ed3fbafc-734b-41ca-ab30-216399fb9168.<br>
> sources=[0] sinks=1<br>
> [2018-01-24 05:52:27.700611] I [MSGID: 108026]<br>
> [afr-self-heal-metadata.c:52:__afr_selfheal_metadata_do]<br>
> 2-homes-replicate-1: performing metadata selfheal on<br>
> b6a53629-a831-4ee3-a35e-f47c04297aaa<br>
> [2018-01-24 05:52:27.703021] I [MSGID: 108026]<br>
> [afr-self-heal-common.c:1656:afr_log_selfheal] 2-homes-replicate-1:<br>
> Completed metadata selfheal on b6a53629-a831-4ee3-a35e-f47c04297aaa.<br>
> sources=[0] sinks=1<br>
><br>
> I enabled debug logging for that volume's client mount with `gluster volume<br>
> set homes diagnostics.client-log-level DEBUG` and then I saw this in the<br>
> client mount log the next time it disconnected:<br>
><br>
> [2018-01-24 08:55:19.138810] D [MSGID: 0] [io-threads.c:358:iot_schedule]<br>
> 0-homes-io-threads: LOOKUP scheduled as fast fop<br>
> [2018-01-24 08:55:19.138849] D [MSGID: 0] [dht-common.c:2711:dht_lookup]<br>
> 0-homes-dht: Calling fresh lookup for<br>
> /vchebii/revtrans/Hircus-XM_018067032.1.pep.align.fas on<br>
> homes-readdir-ahead-1<br>
> [2018-01-24 08:55:19.138928] D [MSGID: 0] [io-threads.c:358:iot_schedule]<br>
> 0-homes-io-threads: FSTAT scheduled as fast fop<br>
> [2018-01-24 08:55:19.138958] D [MSGID: 0] [afr-read-txn.c:220:afr_read_txn]<br>
> 0-homes-replicate-1: e6ee0427-b17d-4464-a738-e8ea70d77d95: generation now vs<br>
> cached: 2, 2<br>
> [2018-01-24 08:55:19.139187] D [MSGID: 0] [dht-common.c:2294:dht_lookup_cbk]<br>
> 0-homes-dht: fresh_lookup returned for<br>
> /vchebii/revtrans/Hircus-XM_018067032.1.pep.align.fas with op_ret 0<br>
> [2018-01-24 08:55:19.139200] D [MSGID: 0]<br>
> [dht-layout.c:873:dht_layout_preset] 0-homes-dht: file =<br>
> 00000000-0000-0000-0000-000000000000, subvol = homes-readdir-ahead-1<br>
> [2018-01-24 08:55:19.139257] D [MSGID: 0] [io-threads.c:358:iot_schedule]<br>
> 0-homes-io-threads: READDIRP scheduled as fast fop<br>
><br>
> On a hunch I disabled both parallel-readdir and readdir-ahead, which I had<br>
> only enabled a few days before, and now all of the clients are much more<br>
> stable, with zero disconnections in the days since I disabled those two<br>
> volume options.<br>
><br>
> Please take a look! Thanks,<br>
><br>
> On Wed, Jan 24, 2018 at 5:59 AM Atin Mukherjee <<a href="mailto:amukherj@redhat.com" target="_blank">amukherj@redhat.com</a>> wrote:<br>
>><br>
>> Adding Poornima to take a look at it and comment.<br>
>><br>
>> On Tue, Jan 23, 2018 at 10:39 PM, Alan Orth <<a href="mailto:alan.orth@gmail.com" target="_blank">alan.orth@gmail.com</a>> wrote:<br>
>>><br>
>>> Hello,<br>
>>><br>
>>> I saw that parallel-readdir was an experimental feature in GlusterFS<br>
>>> version 3.10.0, became stable in version 3.11.0, and is now recommended for<br>
>>> small file workloads in the Red Hat Gluster Storage Server documentation[2].<br>
>>> I've successfully enabled this on one of my volumes but I notice the<br>
>>> following in the client mount log:<br>
>>><br>
>>> [2018-01-23 10:24:24.048055] W [MSGID: 101174]<br>
>>> [graph.c:363:_log_if_unknown_option] 0-homes-readdir-ahead-1: option<br>
>>> 'parallel-readdir' is not recognized<br>
>>> [2018-01-23 10:24:24.048072] W [MSGID: 101174]<br>
>>> [graph.c:363:_log_if_unknown_option] 0-homes-readdir-ahead-0: option<br>
>>> 'parallel-readdir' is not recognized<br>
>>><br>
>>> The GlusterFS version on the client and server is 3.12.4. What is going<br>
>>> on?<br>
>>><br>
>>> [0]<br>
>>> <a href="https://github.com/gluster/glusterfs/blob/release-3.10/doc/release-notes/3.10.0.md" rel="noreferrer" target="_blank">https://github.com/gluster/glusterfs/blob/release-3.10/doc/release-notes/3.10.0.md</a><br>
>>> [1]<br>
>>> <a href="https://github.com/gluster/glusterfs/blob/release-3.11/doc/release-notes/3.11.0.md" rel="noreferrer" target="_blank">https://github.com/gluster/glusterfs/blob/release-3.11/doc/release-notes/3.11.0.md</a><br>
>>> [2]<br>
>>> <a href="https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.3/html/administration_guide/small_file_performance_enhancements" rel="noreferrer" target="_blank">https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.3/html/administration_guide/small_file_performance_enhancements</a><br>
>>><br>
>>> Thank you,<br>
>>><br>
>>><br>
>>> --<br>
>>><br>
>>> Alan Orth<br>
>>> <a href="mailto:alan.orth@gmail.com" target="_blank">alan.orth@gmail.com</a><br>
>>> <a href="https://picturingjordan.com" rel="noreferrer" target="_blank">https://picturingjordan.com</a><br>
>>> <a href="https://englishbulgaria.net" rel="noreferrer" target="_blank">https://englishbulgaria.net</a><br>
>>> <a href="https://mjanja.ch" rel="noreferrer" target="_blank">https://mjanja.ch</a><br>
>>><br>
>>><br>
>>> _______________________________________________<br>
>>> Gluster-users mailing list<br>
>>> <a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a><br>
>>> <a href="http://lists.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">http://lists.gluster.org/mailman/listinfo/gluster-users</a><br>
>><br>
>><br>
> --<br>
><br>
> Alan Orth<br>
> <a href="mailto:alan.orth@gmail.com" target="_blank">alan.orth@gmail.com</a><br>
> <a href="https://picturingjordan.com" rel="noreferrer" target="_blank">https://picturingjordan.com</a><br>
> <a href="https://englishbulgaria.net" rel="noreferrer" target="_blank">https://englishbulgaria.net</a><br>
> <a href="https://mjanja.ch" rel="noreferrer" target="_blank">https://mjanja.ch</a><br>
><br>
><br>
> _______________________________________________<br>
> Gluster-users mailing list<br>
> <a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a><br>
> <a href="http://lists.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">http://lists.gluster.org/mailman/listinfo/gluster-users</a><br>
</blockquote></div>-- <br><div dir="ltr" class="gmail_signature" data-smartmail="gmail_signature"><p dir="ltr">Alan Orth<br>
<a href="mailto:alan.orth@gmail.com">alan.orth@gmail.com</a><br>
<a href="https://picturingjordan.com">https://picturingjordan.com</a><br>
<a href="https://englishbulgaria.net">https://englishbulgaria.net</a><br>
<a href="https://mjanja.ch">https://mjanja.ch</a></p>
</div>