<div dir="ltr">Stefan, <div><br></div><div>We'll have to let somebody else chime in. I don't work on this project, just another user, enthusiast and I've spent, still spending much time tuning my own RDMA gluster configuration. In short, I won't have an answer for you. If nobody can answer, I'd suggest filing a bug, that way it can be tracked and reviewed by developers. </div><div><br></div><div>- Dan</div></div><div class="gmail_extra"><br><div class="gmail_quote">On Wed, May 30, 2018 at 6:34 AM, Stefan Solbrig <span dir="ltr"><<a href="mailto:stefan.solbrig@ur.de" target="_blank">stefan.solbrig@ur.de</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Dear Dan,<br>
<br>
thanks for the quick reply! <br>
<br>
I actually tried restarting all processes (and even rebooting all servers), but the error persists. I can also confirm that all birck processes are running. My volume is a distrubute-only volume (not dispersed, no sharding). <br>
<br>
I also tried mounting with use_readdirp=no, because the error seems to be connected to readdirp, but this option does not change anything. <br>
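
For concreteness, the mount command looked roughly like this ("myserver" and the mount point are placeholders; "glurch" is the volume name):

mount -t glusterfs -o use_readdirp=no myserver:/glurch /mnt/glurch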

I found two options I might try (via: gluster volume get myvolumename all | grep readdirp):

performance.force-readdirp    true
dht.force-readdirp            on

Can I turn these off safely? (Or what precisely do they do?)
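
In case it helps, I assume turning them off would look like this (untested, and I don't know yet whether it's safe, which is exactly my question):

gluster volume set myvolumename performance.force-readdirp false
gluster volume set myvolumename dht.force-readdirp off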

I also ensured that all glusterd processes have unlimited locked memory.
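
For completeness, one way to set this is via a systemd drop-in; I'm sketching it here in case it matters (assuming the unit is called glusterd.service, which may differ between distributions):

mkdir -p /etc/systemd/system/glusterd.service.d
printf '[Service]\nLimitMEMLOCK=infinity\n' > /etc/systemd/system/glusterd.service.d/memlock.conf
systemctl daemon-reload
systemctl restart glusterd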

Just to state it clearly: I do _not_ see any data corruption. Only the directory listings fail (in very rare cases) with rdma transport:

"ls" shows only a part of the files. But then, if I do

stat /path/to/known/filename

it succeeds, and even

md5sum /path/to/known/filename/that/does/not/get/listed/with/ls

yields the correct result.

best wishes,
Stefan
<div class="HOEnZb"><div class="h5"><br>
> Am 30.05.2018 um 03:00 schrieb Dan Lavu <<a href="mailto:dan@redhat.com">dan@redhat.com</a>>:<br>
> <br>
> Forgot to mention, sometimes I have to do force start other volumes as well, its hard to determine which brick process is locked up from the logs. <br>
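> 
> What I end up doing, as a workaround rather than anything official, is comparing the PIDs that "gluster volume status" reports against what is actually running on each node, e.g.:
> 
> ps -ef | grep glusterfsd    # brick processes
> ps -ef | grep glustershd    # self-heal daemon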
> 
> 
> Status of volume: rhev_vms_primary
> Gluster process                                                      TCP Port  RDMA Port  Online  Pid
> ------------------------------------------------------------------------------
> Brick spidey.ib.runlevelone.lan:/gluster/brick/rhev_vms_primary      0         49157      Y       15666
> Brick deadpool.ib.runlevelone.lan:/gluster/brick/rhev_vms_primary    0         49156      Y       2542
> Brick groot.ib.runlevelone.lan:/gluster/brick/rhev_vms_primary       0         49156      Y       2180
> Self-heal Daemon on localhost                                        N/A       N/A        N       N/A    << Brick process is not running on any node.
> Self-heal Daemon on spidey.ib.runlevelone.lan                        N/A       N/A        N       N/A
> Self-heal Daemon on groot.ib.runlevelone.lan                         N/A       N/A        N       N/A
> 
> Task Status of Volume rhev_vms_primary
> ------------------------------------------------------------------------------
> There are no active volume tasks
> 
> 
> 3081  gluster volume start rhev_vms_noshards force
> 3082  gluster volume status
> 3083  gluster volume start rhev_vms_primary force
> 3084  gluster volume status
> 3085  gluster volume start rhev_vms_primary rhev_vms
> 3086  gluster volume start rhev_vms_primary rhev_vms force
> 
> Status of volume: rhev_vms_primary
> Gluster process                                                      TCP Port  RDMA Port  Online  Pid
> ------------------------------------------------------------------------------
> Brick spidey.ib.runlevelone.lan:/gluster/brick/rhev_vms_primary      0         49157      Y       15666
> Brick deadpool.ib.runlevelone.lan:/gluster/brick/rhev_vms_primary    0         49156      Y       2542
> Brick groot.ib.runlevelone.lan:/gluster/brick/rhev_vms_primary       0         49156      Y       2180
> Self-heal Daemon on localhost                                        N/A       N/A        Y       8343
> Self-heal Daemon on spidey.ib.runlevelone.lan                        N/A       N/A        Y       22381
> Self-heal Daemon on groot.ib.runlevelone.lan                         N/A       N/A        Y       20633
> 
> Finally...
> 
> Dan
> 
> On Tue, May 29, 2018 at 8:47 PM, Dan Lavu <dan@redhat.com> wrote:
> Stefan,
> 
> Sounds like a brick process is not running. I have noticed some strangeness in my lab when using RDMA: I often have to forcibly restart the brick processes, often as in every single time I do a major operation (add a new volume, remove a volume, stop a volume, etc.).
> 
> gluster volume status <vol>
> 
> Do any of the self-heal daemons show N/A? If that's the case, try forcing a restart of the volume:
> 
> gluster volume start <vol> force
> 
> This would also explain why your volumes aren't being replicated properly.
> 
> On Tue, May 29, 2018 at 5:20 PM, Stefan Solbrig <stefan.solbrig@ur.de> wrote:
> Dear all,
> 
> I faced a problem with a glusterfs volume (pure distributed, _not_ dispersed) over RDMA transport. One user had a directory with a large number of files (50,000 files), and just doing an "ls" in this directory yielded a "Transport endpoint is not connected" error. The effect is that "ls" shows only some of the files, but not all of them.
> 
> The respective log file shows this error message:
> 
> [2018-05-20 20:38:25.114978] W [MSGID: 114031] [client-rpc-fops.c:2578:client3_3_readdirp_cbk] 0-glurch-client-0: remote operation failed [Transport endpoint is not connected]
> [2018-05-20 20:38:27.732796] W [MSGID: 103046] [rdma.c:4089:gf_rdma_process_recv] 0-rpc-transport/rdma: peer (10.100.245.18:49153), couldn't encode or decode the msg properly or write chunks were not provided for replies that were bigger than RDMA_INLINE_THRESHOLD (2048)
> [2018-05-20 20:38:27.732844] W [MSGID: 114031] [client-rpc-fops.c:2578:client3_3_readdirp_cbk] 0-glurch-client-3: remote operation failed [Transport endpoint is not connected]
> [2018-05-20 20:38:27.733181] W [fuse-bridge.c:2897:fuse_readdirp_cbk] 0-glusterfs-fuse: 72882828: READDIRP => -1 (Transport endpoint is not connected)
> 
> I already set the memlock limit for glusterd to unlimited, but the problem persists.
> 
> Only going from RDMA transport to TCP transport solved the problem. (I'm running the volume now in mixed mode, config.transport=tcp,rdma.) Mounting with transport=rdma shows this error; mounting with transport=tcp is fine.
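> 
> For reference, the two mounts I'm comparing look roughly like this ("myserver" is a placeholder for one of my servers; "glurch" is the volume name):
> 
> mount -t glusterfs -o transport=rdma myserver:/glurch /mnt/glurch   # fails on some large directories
> mount -t glusterfs -o transport=tcp  myserver:/glurch /mnt/glurch   # works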
> 
> However, this problem does not arise on all large directories, only on some. I haven't recognized a pattern yet.
> 
> I'm using glusterfs v3.12.6 on the servers, with QDR InfiniBand HCAs.
> 
> Is this a known issue with RDMA transport?
> 
> best wishes,
> Stefan
> 
> _______________________________________________
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users