[Gluster-devel] ./tests/basic/uss.t is timing out in release-6 branch

FNU Raghavendra Manjunath rabhat at redhat.com
Tue Apr 30 14:42:34 UTC 2019

The failure looks similar to the issue I had mentioned in [1]

In short, for some reason the cleanup (the cleanup function that we call in
our .t files) seems to be taking more time and also not cleaning up
properly. This leads to problems in the 2nd iteration, where basic things
such as volume creation or volume start fail with ENODATA or ENOENT
errors.
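A minimal sketch (plain Python, not the actual test-harness cleanup) of the
kind of post-cleanup check that would catch this: after cleanup runs, the
backend area must be empty, otherwise the next iteration inherits stale
brick directories. A temp dir stands in for /d/backends here.

```python
import os
import shutil
import tempfile

# Stand-in for the /d/backends test area used by the .t files.
backends = tempfile.mkdtemp(prefix="backends-")
os.makedirs(os.path.join(backends, "3", "patchy_snap_mnt"))

# What cleanup is expected to do: remove every per-brick subtree.
for entry in os.listdir(backends):
    shutil.rmtree(os.path.join(backends, entry))

# Verify nothing survived before the next iteration starts; any stale
# entry here would poison the following 'volume create'/'volume start'.
leftover = os.listdir(backends)
print("clean" if not leftover else "stale entries: %s" % leftover)
os.rmdir(backends)
```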

The 2nd iteration of uss.t had the following errors:

"[2019-04-29 09:08:15.275773]:++++++++++ G_LOG:./tests/basic/uss.t: TEST:
39 gluster --mode=script --wignore volume set patchy nfs.disable false
[2019-04-29 09:08:15.390550]  : volume set patchy nfs.disable false :
[2019-04-29 09:08:15.404624]:++++++++++ G_LOG:./tests/basic/uss.t: TEST: 42
gluster --mode=script --wignore volume start patchy ++++++++++
[2019-04-29 09:08:15.468780]  : volume start patchy : FAILED : Failed to
get extended attribute trusted.glusterfs.volume-id for brick dir
/d/backends/3/patchy_snap_mnt. Reason : No data available

These are the initial steps to create and start a volume. Why the
trusted.glusterfs.volume-id extended attribute is absent is not clear. The
analysis in [1] had errors of ENOENT (i.e. the export directory itself was
missing). I suspect this to be because of some issue with the cleanup
mechanism at the end of the tests.
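For illustration, the "No data available" (ENODATA) in the log is exactly
what a lookup of a missing extended attribute returns. A small sketch
(hypothetical, not glusterd code; it uses the user. xattr namespace with a
made-up name, since reading trusted.glusterfs.volume-id requires root):

```python
import errno
import os
import tempfile

# A freshly (re)created brick directory carries no volume-id xattr, so
# the lookup fails with ENODATA -- the same error glusterd reports when
# it refuses to start the volume.
brick = tempfile.mkdtemp(prefix="brick-")
try:
    os.getxattr(brick, "user.volume-id")  # stand-in for trusted.glusterfs.volume-id
    msg = "unexpected: xattr present"
except OSError as e:
    # ENODATA -> "No data available"; on filesystems without xattr
    # support this may instead be "Operation not supported".
    msg = os.strerror(e.errno)
print(msg)
os.rmdir(brick)
```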

[1] https://lists.gluster.org/pipermail/gluster-devel/2019-April/056104.html

On Tue, Apr 30, 2019 at 8:37 AM Sanju Rakonde <srakonde at redhat.com> wrote:

> Hi Raghavendra,
> ./tests/basic/uss.t is timing out in release-6 branch consistently. One
> such instance is https://review.gluster.org/#/c/glusterfs/+/22641/. Can
> you please look into this?
> --
> Thanks,
> Sanju
