<div dir="ltr"><div dir="ltr"><div>Previous logs related to client not bricks, below are the brick logs</div><div><br></div><div>[2019-01-12 12:25:25.893485]:++++++++++ G_LOG:./tests/bugs/ec/bug-1236065.t: TEST: 68 rm -f 0.o 10.o 11.o 12.o 13.o 14.o 15.o 16.o 17.o 18.o 19.o 1.o 2.o 3.o 4.o 5.o 6.o 7.o 8.o 9.o ++++++++++</div><div>The message "I [MSGID: 101016] [glusterfs3.h:746:dict_to_xdr] 0-dict: key 'trusted.ec.size' would not be sent on wire in the future [Invalid argument]" repeated 199 times between [2019-01-12 12:25:25.283989] and [2019-01-12 12:25:25.899532]</div><div>[2019-01-12 12:25:25.903375] E [MSGID: 113001] [posix-inode-fd-ops.c:4617:_posix_handle_xattr_keyvalue_pair] 8-patchy-posix: fgetxattr failed on gfid=d91f6331-d394-479d-ab51-6bcf674ac3e0 while doing xattrop: Key:trusted.ec.dirty (Bad file descriptor) [Bad file descriptor]</div><div>[2019-01-12 12:25:25.903468] E [MSGID: 115073] [server-rpc-fops_v2.c:1805:server4_fxattrop_cbk] 0-patchy-server: 1486: FXATTROP 2 (d91f6331-d394-479d-ab51-6bcf674ac3e0), client: CTX_ID:b785c2b0-3453-4a03-b129-19e6ceeb5346-GRAPH_ID:0-PID:24147-HOST:softserve-moagrawa-test.1-PC_NAME:patchy-client-1-RECON_NO:-1, error-xlator: patchy-posix [Bad file descriptor]</div><div><br></div><div><br></div><div>Thanks,</div><div>Mohit Agrawal</div></div></div><br><div class="gmail_quote"><div dir="ltr">On Sat, Jan 12, 2019 at 6:29 PM Mohit Agrawal <<a href="mailto:moagrawa@redhat.com">moagrawa@redhat.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="auto"><div dir="auto"><br></div><div dir="auto">For specific to "add-brick-and-validate-replicated-volume-options.t" i have posted a patch <a href="https://review.gluster.org/22015" target="_blank">https://review.gluster.org/22015</a>.</div><div dir="auto">For test case "ec/bug-1236065.t" I think the issue needs to be check by ec team</div><div dir="auto"><br></div><div dir="auto">On the brick side, it is showing below logs </div><div dir="auto"><br></div><div dir="auto">>>>>>>>>>>>>>>>>></div><div dir="auto"><br></div><div dir="auto">on wire in the future [Invalid argument]</div><div dir="auto">The message "I [MSGID: 101016] [glusterfs3.h:746:dict_to_xdr] 0-dict: key 'trusted.ec.dirty' would not be sent on wire in the future [Invalid argument]" repeated 3 times between [2019-01-12 12:25:25.902828] and [2019-01-12 12:25:25.902992]</div><div dir="auto">[2019-01-12 12:25:25.903553] W [MSGID: 114031] [client-rpc-fops_v2.c:1614:client4_0_fxattrop_cbk] 0-patchy-client-1: remote operation failed [Bad file descriptor]</div><div dir="auto">[2019-01-12 12:25:25.903998] W [MSGID: 122040] [ec-common.c:1181:ec_prepare_update_cbk] 0-patchy-disperse-0: Failed to get size and version : FOP : 'FXATTROP' failed on gfid d91f6331-d394-479d-ab51-6bcf674ac3e0 [Input/output error]</div><div dir="auto">[2019-01-12 12:25:25.904059] W [fuse-bridge.c:1907:fuse_unlink_cbk] 0-glusterfs-fuse: 3259: UNLINK() /test/0.o => -1 (Input/output error)</div><div dir="auto"><br></div><div dir="auto">>>>>>>>>>>>>>>>>>>></div><div dir="auto"><br></div><div dir="auto">Test case is getting timed out because "volume heal $V0 full" command is stuck, look's like shd is getting stuck at getxattr</div><div dir="auto"><br></div><div dir="auto">>>>>>>>>>>>>>>.</div><div dir="auto"><br></div><div dir="auto">Thread 8 (Thread 0x7f83777fe700 (LWP 25552)):</div><div dir="auto">#0 0x00007f83bb70d945 in 

On Sat, Jan 12, 2019 at 6:29 PM Mohit Agrawal <moagrawa@redhat.com> wrote:

For "add-brick-and-validate-replicated-volume-options.t" specifically, I have posted a patch: https://review.gluster.org/22015.
For the test case "ec/bug-1236065.t", I think the issue needs to be checked by the EC team.

On the brick side, it is showing the logs below.

>>>>>>>>>>>>>>>>>

on wire in the future [Invalid argument]
The message "I [MSGID: 101016] [glusterfs3.h:746:dict_to_xdr] 0-dict: key 'trusted.ec.dirty' would not be sent on wire in the future [Invalid argument]" repeated 3 times between [2019-01-12 12:25:25.902828] and [2019-01-12 12:25:25.902992]
[2019-01-12 12:25:25.903553] W [MSGID: 114031] [client-rpc-fops_v2.c:1614:client4_0_fxattrop_cbk] 0-patchy-client-1: remote operation failed [Bad file descriptor]
[2019-01-12 12:25:25.903998] W [MSGID: 122040] [ec-common.c:1181:ec_prepare_update_cbk] 0-patchy-disperse-0: Failed to get size and version : FOP : 'FXATTROP' failed on gfid d91f6331-d394-479d-ab51-6bcf674ac3e0 [Input/output error]
[2019-01-12 12:25:25.904059] W [fuse-bridge.c:1907:fuse_unlink_cbk] 0-glusterfs-fuse: 3259: UNLINK() /test/0.o => -1 (Input/output error)

>>>>>>>>>>>>>>>>>

The test case is getting timed out because the "volume heal $V0 full" command is stuck; it looks like shd is getting stuck at getxattr (see the sketch and the thread stacks below).
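
For context on the stacks that follow: syncop_getxattr() (frame #1 in every healer thread) is the synchronous wrapper around the asynchronous GETXATTR fop, and the calling thread waits on a condition variable until the fop callback signals completion; if the reply never arrives, the thread stays parked in pthread_cond_wait() indefinitely. Below is a minimal sketch of that wait pattern, with simplified, hypothetical names (the struct fields and helpers here are illustrative, not the actual syncop.c definitions):

/* Minimal sketch of the syncop blocking-wait pattern -- illustrative
 * names, not the actual syncop.c structures. */
#include <pthread.h>
#include <stdbool.h>

struct syncargs {
    pthread_mutex_t mutex;
    pthread_cond_t  cond;
    bool            done;   /* set by the fop callback */
    int             op_ret; /* fop result filled in by the callback */
};

/* Invoked from the fop completion callback to wake the waiter. */
static void
syncargs_wake(struct syncargs *args, int op_ret)
{
    pthread_mutex_lock(&args->mutex);
    args->op_ret = op_ret;
    args->done = true;
    pthread_cond_signal(&args->cond);
    pthread_mutex_unlock(&args->mutex);
}

/* Invoked by the issuing thread after winding the async fop. If the
 * callback never runs (e.g. the reply never arrives), this blocks in
 * pthread_cond_wait() forever -- the state seen in frame #0 of every
 * healer thread below. */
static int
syncargs_wait(struct syncargs *args)
{
    pthread_mutex_lock(&args->mutex);
    while (!args->done)
        pthread_cond_wait(&args->cond, &args->mutex);
    pthread_mutex_unlock(&args->mutex);
    return args->op_ret;
}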
dir="auto">#4 0x00007f83bc930ac2 in syncop_ftw (subvol=0x7f83a8010af0, loc=loc@entry=0x7f83767fbde0, pid=pid@entry=-6, data=data@entry=0x7f83a8030960, fn=fn@entry=0x7f83add03140 <ec_shd_full_heal>) at syncop-utils.c:125</div><div dir="auto">#5 0x00007f83add03534 in ec_shd_full_sweep (healer=healer@entry=0x7f83a8030960, inode=<optimized out>) at ec-heald.c:311</div><div dir="auto">#6 0x00007f83add0367b in ec_shd_full_healer (data=0x7f83a8030960) at ec-heald.c:372</div><div dir="auto">#7 0x00007f83bb709e25 in start_thread () from /usr/lib64/libpthread.so.0</div><div dir="auto">#8 0x00007f83bafd634d in clone () from /usr/lib64/libc.so.6</div><div dir="auto">Thread 5 (Thread 0x7f8375ffb700 (LWP 25555)):</div><div dir="auto">#0 0x00007f83bb70d945 in pthread_cond_wait@@GLIBC_2.3.2 () from /usr/lib64/libpthread.so.0</div><div dir="auto">#1 0x00007f83bc910e5b in syncop_getxattr (subvol=<optimized out>, loc=loc@entry=0x7f8375ffabb0, dict=dict@entry=0x0, key=key@entry=0x7f83add06a28 "trusted.ec.heal", xdata_in=xdata_in@entry=0x0, xdata_out=xdata_out@entry=0x0) at syncop.c:1680</div><div dir="auto">#2 0x00007f83add02f27 in ec_shd_selfheal (healer=0x7f83a80309d0, child=<optimized out>, loc=0x7f8375ffabb0, full=<optimized out>) at ec-heald.c:161</div><div dir="auto">#3 0x00007f83add0325b in ec_shd_full_heal (subvol=0x7f83a80144d0, entry=<optimized out>, parent=0x7f8375ffade0, data=0x7f83a80309d0) at ec-heald.c:294</div><div dir="auto">#4 0x00007f83bc930ac2 in syncop_ftw (subvol=0x7f83a80144d0, loc=loc@entry=0x7f8375ffade0, pid=pid@entry=-6, data=data@entry=0x7f83a80309d0, fn=fn@entry=0x7f83add03140 <ec_shd_full_heal>) at syncop-utils.c:125</div><div dir="auto">#5 0x00007f83add03534 in ec_shd_full_sweep (healer=healer@entry=0x7f83a80309d0, inode=<optimized out>) at ec-heald.c:311</div><div dir="auto">#6 0x00007f83add0367b in ec_shd_full_healer (data=0x7f83a80309d0) at ec-heald.c:372</div><div dir="auto">#7 0x00007f83bb709e25 in start_thread () from /usr/lib64/libpthread.so.0</div><div dir="auto">#8 0x00007f83bafd634d in clone () from /usr/lib64/libc.so.6</div><div dir="auto">Thread 4 (Thread 0x7f83757fa700 (LWP 25556)):</div><div dir="auto">#0 0x00007f83bb70d945 in pthread_cond_wait@@GLIBC_2.3.2 () from /usr/lib64/libpthread.so.0</div><div dir="auto">#1 0x00007f83bc910e5b in syncop_getxattr (subvol=<optimized out>, loc=loc@entry=0x7f83757f9bb0, dict=dict@entry=0x0, key=key@entry=0x7f83add06a28 "trusted.ec.heal", xdata_in=xdata_in@entry=0x0, xdata_out=xdata_out@entry=0x0) at syncop.c:1680</div><div dir="auto">#2 0x00007f83add02f27 in ec_shd_selfheal (healer=0x7f83a8030a40, child=<optimized out>, loc=0x7f83757f9bb0, full=<optimized out>) at ec-heald.c:161</div><div dir="auto">#3 0x00007f83add0325b in ec_shd_full_heal (subvol=0x7f83a8017eb0, entry=<optimized out>, parent=0x7f83757f9de0, data=0x7f83a8030a40) at ec-heald.c:294</div><div dir="auto">#4 0x00007f83bc930ac2 in syncop_ftw (subvol=0x7f83a8017eb0, loc=loc@entry=0x7f83757f9de0, pid=pid@entry=-6, data=data@entry=0x7f83a8030a40, fn=fn@entry=0x7f83add03140 <ec_shd_full_heal>) at syncop-utils.c:125</div><div dir="auto">#5 0x00007f83add03534 in ec_shd_full_sweep (healer=healer@entry=0x7f83a8030a40, inode=<optimized out>) at ec-heald.c:311</div><div dir="auto">#6 0x00007f83add0367b in ec_shd_full_healer (data=0x7f83a8030a40) at ec-heald.c:372</div><div dir="auto">#7 0x00007f83bb709e25 in start_thread () from /usr/lib64/libpthread.so.0</div><div dir="auto">#8 0x00007f83bafd634d in clone () from /usr/lib64/libc.so.6</div><div dir="auto">Thread 3 (Thread 
0x7f8374ff9700 (LWP 25557)):</div><div dir="auto">#0 0x00007f83bb70d945 in pthread_cond_wait@@GLIBC_2.3.2 () from /usr/lib64/libpthread.so.0</div><div dir="auto">#1 0x00007f83bc910e5b in syncop_getxattr (subvol=<optimized out>, loc=loc@entry=0x7f8374ff8bb0, dict=dict@entry=0x0, key=key@entry=0x7f83add06a28 "trusted.ec.heal", xdata_in=xdata_in@entry=0x0, xdata_out=xdata_out@entry=0x0) at syncop.c:1680</div><div dir="auto">#2 0x00007f83add02f27 in ec_shd_selfheal (healer=0x7f83a8030ab0, child=<optimized out>, loc=0x7f8374ff8bb0, full=<optimized out>) at ec-heald.c:161</div><div dir="auto">#3 0x00007f83add0325b in ec_shd_full_heal (subvol=0x7f83a801b890, entry=<optimized out>, parent=0x7f8374ff8de0, data=0x7f83a8030ab0) at ec-heald.c:294</div><div dir="auto">#4 0x00007f83bc930ac2 in syncop_ftw (subvol=0x7f83a801b890, loc=loc@entry=0x7f8374ff8de0, pid=pid@entry=-6, data=data@entry=0x7f83a8030ab0, fn=fn@entry=0x7f83add03140 <ec_shd_full_heal>) at syncop-utils.c:125</div><div dir="auto">#5 0x00007f83add03534 in ec_shd_full_sweep (healer=healer@entry=0x7f83a8030ab0, inode=<optimized out>) at ec-heald.c:311</div><div dir="auto">#6 0x00007f83add0367b in ec_shd_full_healer (data=0x7f83a8030ab0) at ec-heald.c:372</div><div dir="auto">#7 0x00007f83bb709e25 in start_thread () from /usr/lib64/libpthread.so.0</div><div dir="auto">#8 0x00007f83bafd634d in clone () from /usr/lib64/libc.so.6</div><div dir="auto">Thread 2 (Thread 0x7f8367fff700 (LWP 25558)):</div><div dir="auto">#0 0x00007f83bb70d945 in pthread_cond_wait@@GLIBC_2.3.2 () from /usr/lib64/libpthread.so.0</div><div dir="auto">#1 0x00007f83bc910e5b in syncop_getxattr (subvol=<optimized out>, loc=loc@entry=0x7f8367ffebb0, dict=dict@entry=0x0, key=key@entry=0x7f83add06a28 "trusted.ec.heal", xdata_in=xdata_in@entry=0x0, xdata_out=xdata_out@entry=0x0) at syncop.c:1680</div><div dir="auto">#2 0x00007f83add02f27 in ec_shd_selfheal (healer=0x7f83a8030b20, child=<optimized out>, loc=0x7f8367ffebb0, full=<optimized out>) at ec-heald.c:161</div><div dir="auto">#3 0x00007f83add0325b in ec_shd_full_heal (subvol=0x7f83a801f270, entry=<optimized out>, parent=0x7f8367ffede0, data=0x7f83a8030b20) at ec-heald.c:294</div><div dir="auto">#4 0x00007f83bc930ac2 in syncop_ftw (subvol=0x7f83a801f270, loc=loc@entry=0x7f8367ffede0, pid=pid@entry=-6, data=data@entry=0x7f83a8030b20, fn=fn@entry=0x7f83add03140 <ec_shd_full_heal>) at syncop-utils.c:125</div><div dir="auto">#5 0x00007f83add03534 in ec_shd_full_sweep (healer=healer@entry=0x7f83a8030b20, inode=<optimized out>) at ec-heald.c:311</div><div dir="auto">#6 0x00007f83add0367b in ec_shd_full_healer (data=0x7f83a8030b20) at ec-heald.c:372</div><div dir="auto">#7 0x00007f83bb709e25 in start_thread () from /usr/lib64/libpthread.so.0</div><div dir="auto">#8 0x00007f83bafd634d in clone () from /usr/lib64/libc.so.6</div><div dir="auto">Thread 1 (Thread 0x7f83bcdd1780 (LWP 25383)):</div><div dir="auto">#0 0x00007f83bb70af57 in pthread_join () from /usr/lib64/libpthread.so.0</div><div dir="auto">#1 0x00007f83bc92eff8 in event_dispatch_epoll (event_pool=0x55af0a6dd560) at event-epoll.c:846</div><div dir="auto">#2 0x000055af0a4116b8 in main (argc=15, argv=0x7fff75610898) at glusterfsd.c:2848</div><div dir="auto"><br></div><div dir="auto"><br></div><div dir="auto">>>>>>>>>>>>>>>>>>>>>>>>>>>.</div><div dir="auto"><br></div><div>Thanks,</div><div>Mohit Agrawal</div></div></div></div></div><br><div class="gmail_quote"><div dir="ltr">On Fri 11 Jan, 2019, 21:20 Shyam Ranganathan <<a href="mailto:srangana@redhat.com" 
target="_blank">srangana@redhat.com</a> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">We can check health on master post the patch as stated by Mohit below.<br>

Release-5 is causing some concern, as we needed to tag the release
yesterday, but we have the following 2 tests failing or dumping core
pretty regularly; these need attention:

ec/bug-1236065.t
glusterd/add-brick-and-validate-replicated-volume-options.t

Shyam

On 1/10/19 6:20 AM, Mohit Agrawal wrote:
> I think we should consider regression builds after merging the patch
> (https://review.gluster.org/#/c/glusterfs/+/21990/),
> as we know this patch introduced some delay.
> 
> Thanks,
> Mohit Agrawal
> 
> On Thu, Jan 10, 2019 at 3:55 PM Atin Mukherjee <amukherj@redhat.com> wrote:
> 
> Mohit, Sanju - request you to investigate the failures related to
> glusterd and brick-mux and report back to the list.
> 
> On Thu, Jan 10, 2019 at 12:25 AM Shyam Ranganathan
> <srangana@redhat.com> wrote:
> 
> Hi,
> 
> As part of branching preparation next week for release-6, please find
> test failures and respective test links here [1].
> 
> The top tests that are failing/dumping core are listed below and
> need attention:
> - ec/bug-1236065.t
> - glusterd/add-brick-and-validate-replicated-volume-options.t
> - readdir-ahead/bug-1390050.t
> - glusterd/brick-mux-validation.t
> - bug-1432542-mpx-restart-crash.t
> 
> Others of interest:
> - replicate/bug-1341650.t
> 
> Please file a bug if needed against the test case and report the same
> here; in case a problem is already addressed, please send back the
> patch details that address this issue as a response to this mail.
> 
> Thanks,
> Shyam
> 
> [1] Regression failures:
> https://hackmd.io/wsPgKjfJRWCP8ixHnYGqcA?view
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel