<div dir="ltr"><div>Hi,</div><div><br></div><div>As you have mentioned client/server version in thread it shows package version are different on both(client,server).</div><div>We would recommend you to upgrade both servers and clients to rhs-3.10.1.</div><div>If it is not possible to upgrade both(client,server) then in this case it is required to upgrade client only.</div><div><br></div><div>ThanksĀ </div><div>Mohit Agrawal</div><div class="gmail_extra"><br><div class="gmail_quote">On Fri, Mar 31, 2017 at 2:27 PM, Mohit Agrawal <span dir="ltr"><<a href="mailto:moagrawa@redhat.com" target="_blank">moagrawa@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><pre><font color="#000000"><span style="white-space:pre-wrap">Hi,
As per the attached glusterdump/stackdump, this appears to be a known issue (<a href="https://bugzilla.redhat.com/show_bug.cgi?id=1372211" target="_blank">https://bugzilla.redhat.com/<wbr>show_bug.cgi?id=1372211</a>), and the issue is already fixed by the patch (<a href="https://review.gluster.org/#/c/15380/" target="_blank">https://review.gluster.org/#/<wbr>c/15380/</a>).
The issue happens in the following scenario.
Assume a file is opened with fd1 and fd2.
1. Some WRITE ops to fd1 got an error and were added back to the 'todo'
queue because of those errors.
2. fd2 is closed, and a FLUSH op is sent to write-behind.
3. The FLUSH cannot be unwound because it is not a legal waiter for those
failed WRITEs (as the function __wb_request_waiting_on() determines), and
those failed WRITEs also cannot be completed while fd1 stays open, so fd2
gets stuck in the close syscall.
The statedump also shows that the fd of the FLUSH op is not the same as the fd of the WRITE ops.
Kindly upgrade the package to 3.10.1 and share the result.
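On an RPM-based client (like the CentOS nodes mentioned in this thread), a rough
sketch of the check-and-upgrade steps could be as follows; the exact package names
and the repository providing 3.10.1 are assumptions, so adjust to your environment:
  # list the gluster packages currently installed on the client
  rpm -qa | grep -i gluster
  # upgrade the client-side packages from a repository that provides 3.10.1
  yum update glusterfs glusterfs-libs glusterfs-fuse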
Thanks
Mohit Agrawal<br></span></font></pre><pre style="white-space:pre-wrap;color:rgb(0,0,0)"><br></pre><pre style="white-space:pre-wrap;color:rgb(0,0,0)">On Fri, Mar 31, 2017 at 12:29 PM, Amar Tumballi <<a href="http://lists.gluster.org/mailman/listinfo/gluster-users" target="_blank">atumball at redhat.com</a>> wrote:
><i> Hi Alvin,
</i>><i>
</i>><i> Thanks for the dump output. It helped a bit.
</i>><i>
</i>><i> For now, I recommend turning off the open-behind and read-ahead performance
</i>><i> translators to get rid of this situation, as I noticed hung FLUSH
</i>><i> operations from these translators.
</i>><i>
</i>
Looking at the snippet below, it seems I gave the wrong advice:
[global.callpool.stack.61]
><i> stack=0x7f6c6f628f04
</i>><i> uid=48
</i>><i> gid=48
</i>><i> pid=11077
</i>><i> unique=10048797
</i>><i> lk-owner=a73ae5bdb5fcd0d2
</i>><i> op=FLUSH
</i>><i> type=1
</i>><i> cnt=5
</i>><i>
</i>><i> [global.callpool.stack.61.<wbr>frame.1]
</i>><i> frame=0x7f6c6f793d88
</i>><i> ref_count=0
</i>><i> translator=edocs-production-<wbr>write-behind
</i>><i> complete=0
</i>><i> parent=edocs-production-read-<wbr>ahead
</i>><i> wind_from=ra_flush
</i>><i> wind_to=FIRST_CHILD (this)->fops->flush
</i>><i> unwind_to=ra_flush_cbk
</i>><i>
</i>><i> [global.callpool.stack.61.<wbr>frame.2]
</i>><i> frame=0x7f6c6f796c90
</i>><i> ref_count=1
</i>><i> translator=edocs-production-<wbr>read-ahead
</i>><i> complete=0
</i>><i> parent=edocs-production-open-<wbr>behind
</i>><i> wind_from=default_flush_resume
</i>><i> wind_to=FIRST_CHILD(this)-><wbr>fops->flush
</i>><i> unwind_to=default_flush_cbk
</i>><i>
</i>><i> [global.callpool.stack.61.<wbr>frame.3]
</i>><i> frame=0x7f6c6f79b724
</i>><i> ref_count=1
</i>><i> translator=edocs-production-<wbr>open-behind
</i>><i> complete=0
</i>><i> parent=edocs-production
</i>><i> wind_from=io_stats_flush
</i>><i> wind_to=FIRST_CHILD(this)-><wbr>fops->flush
</i>><i> unwind_to=io_stats_flush_cbk
</i>><i>
</i>><i> [global.callpool.stack.61.<wbr>frame.4]
</i>><i> frame=0x7f6c6f79b474
</i>><i> ref_count=1
</i>><i> translator=edocs-production
</i>><i> complete=0
</i>><i> parent=fuse
</i>><i> wind_from=fuse_flush_resume
</i>><i> wind_to=FIRST_CHILD(this)-><wbr>fops->flush
</i>><i> unwind_to=fuse_err_cbk
</i>><i>
</i>><i> [global.callpool.stack.61.<wbr>frame.5]
</i>><i> frame=0x7f6c6f796684
</i>><i> ref_count=1
</i>><i> translator=fuse
</i>><i> complete=0
</i>><i>
</i>
Most probably, the issue is with write-behind's flush. So please turn off
write-behind and test. If you don't have any hung httpd processes after
that, please let us know.
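For reference, a minimal sketch of disabling write-behind with the gluster CLI,
assuming the volume is named edocs-production as the statedump translator names
suggest (substitute your own volume name):
  # turn off the write-behind performance translator for this volume
  gluster volume set edocs-production performance.write-behind off
  # confirm it is listed under "Options Reconfigured"
  gluster volume info edocs-production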
-Amar
><i> -Amar
</i>><i>
</i>><i> On Wed, Mar 29, 2017 at 6:56 AM, Alvin Starr <<a href="http://lists.gluster.org/mailman/listinfo/gluster-users" target="_blank">alvin at netvel.net</a>> wrote:
</i>><i>
</i>>><i> We are running gluster 3.8.9-1 on CentOS 7.3.1611 for the servers, and
</i>>><i> 3.7.11-2 on CentOS 6.8 for the clients.
</i>>><i>
</i>>><i> We are seeing httpd processes hang in fuse_request_send or sync_page.
</i>>><i>
</i>>><i> These calls are from PHP 5.3.3-48 scripts
</i>>><i>
</i>>><i> I am attaching a tgz file that contains the process dump from glusterfsd
</i>>><i> and the hung pids, along with the offending pids' stacks from
</i>>><i> /proc/{pid}/stack.
</i>>><i>
</i>>><i> This has been a low level annoyance for a while but it has become a much
</i>>><i> bigger issue because the number of hung processes went from a few a week to
</i>>><i> a few hundred a day.
</i>>><i>
</i>>><i>
</i>>><i> --
</i>>><i> Alvin Starr || voice: (905)513-7688
</i>>><i> Netvel Inc. || Cell: (416)806-0133
</i>>><i> <a href="http://lists.gluster.org/mailman/listinfo/gluster-users" target="_blank">alvin at netvel.net</a> ||
</i>>><i>
</i>>><i>
</i>>><i> ______________________________<wbr>_________________
</i>>><i> Gluster-users mailing list
</i>>><i> <a href="http://lists.gluster.org/mailman/listinfo/gluster-users" target="_blank">Gluster-users at gluster.org</a>
</i>>><i> <a href="http://lists.gluster.org/mailman/listinfo/gluster-users" target="_blank">http://lists.gluster.org/<wbr>mailman/listinfo/gluster-users</a>
</i>>><i>
</i>><i>
</i>><span class="HOEnZb"><font color="#888888"><i>
</i>><i>
</i>><i> --
</i>><i> Amar Tumballi (amarts)
</i>><i>
</i>
-- </font></span></pre></div>
</blockquote></div><br></div></div>