[Gluster-devel] Tests failing for distributed regression framework

Shyam Ranganathan srangana at redhat.com
Wed Jul 18 17:23:14 UTC 2018


On 07/18/2018 01:16 PM, Deepshikha Khandelwal wrote:
> Shyam,
> 
> Thank you for pointing this out. I've updated the logs for bug-990028.t test.

Yup, I looked at it. The ENOSPC failure is in setxattr on the brick, as we
are attempting to set a large number of extended attributes due to the
hardlinks to the file. The failure log is as follows:

[2018-07-18 12:50:07.298478]:++++++++++
G_LOG:tests/bugs/posix/bug-990028.t: TEST: 37 ln /mnt/glusterfs/0/file1
/mnt/glusterfs/0/file45 ++++++++++
[2018-07-18 12:50:07.307101] W [MSGID: 113117]
[posix-metadata.c:671:posix_set_parent_ctime] 0-patchy-posix: posix
parent set mdata failed on file [No such file or directory]
[2018-07-18 12:50:07.322628] W [MSGID: 113093]
[posix-gfid-path.c:51:posix_set_gfid2path_xattr] 0-patchy-posix: setting
gfid2path xattr failed on /d/backends/brick/file45: key =
trusted.gfid2path.4434be659b4d25e4  [No space left on device]
[2018-07-18 12:50:07.322813] I [MSGID: 115062]
[server-rpc-fops_v2.c:1089:server4_link_cbk] 0-patchy-server: 333: LINK
/file43 (40ef3115-f818-4cc2-a5c3-64875f7a273a) ->
00000000-0000-0000-0000-000000000001/file45, client:
CTX_ID:98c24d79-4889-4aba-bc93-91e1d5d73abe-GRAPH_ID:0-PID:4993-HOST:distributed-testing.8b445247-2057-47e7-894f-41e4a91bb536-PC_NAME:patchy-client-0-RECON_NO:-0,
error-xlator: patchy-posix [No space
left on device]
[2018-07-18 12:50:07.335223]:++++++++++
G_LOG:tests/bugs/posix/bug-990028.t: TEST: 37 ln /mnt/glusterfs/0/file1
/mnt/glusterfs/0/file46 ++++++++++
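As a back-of-the-envelope sketch of why the hardlinks matter here: each link
gets its own trusted.gfid2path.<hash> xattr. The key length below is taken
from the log above; the "<parent-gfid>/<basename>" value layout and the
absence of per-xattr filesystem overhead are assumptions, so treat the number
as a lower bound:

```shell
#!/bin/sh
# Rough estimate of the gfid2path xattr footprint on the brick inode.
# Key copied from the log; value layout "<parent-gfid>/<basename>" is
# an assumption, and per-xattr filesystem overhead is ignored.
key="trusted.gfid2path.4434be659b4d25e4"
val="00000000-0000-0000-0000-000000000001/file45"
links=42                                  # links created before the first failure
per_link=$(( ${#key} + ${#val} ))         # bytes per hardlink's xattr
total=$(( links * per_link ))
echo "approx gfid2path xattr bytes for $links links: $total"
```

A few KB of xattrs cannot fit inline in a small inode, so XFS has to spill
them into attribute blocks, which is where the inode size and free space of
the backing filesystem start to matter.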

We need to determine what is different about the backing XFS filesystem
between the instances where this works and the distributed instances (or,
alternatively, determine which options let us create an XFS filesystem that
does not run out of space when adding extended attrs, and apply those to the
distributed test setup).
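One known knob here is the XFS inode size: with the older mkfs.xfs default of
256 bytes, far fewer xattrs fit inline than with the 512 bytes the Gluster
install guide recommends for bricks. A provisioning sketch to compare and, if
needed, apply this on the distributed setup (the device path is a
placeholder):

```shell
# Check what the current brick FS was created with (look at isize=):
xfs_info /d/backends

# Recreate the brick FS with a larger inode size; /dev/vdb1 is a
# placeholder for the actual backing device. Gluster's install guide
# recommends 512-byte inodes so more xattrs can live inline.
mkfs.xfs -f -i size=512 /dev/vdb1
```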

> On Wed, Jul 18, 2018 at 8:40 PM Shyam Ranganathan <srangana at redhat.com> wrote:
>>
>> On 07/18/2018 10:51 AM, Shyam Ranganathan wrote:
>>> On 07/18/2018 05:42 AM, Deepshikha Khandelwal wrote:
>>>> Hi all,
>>>>
>>>> There are tests which have been constantly failing for the distributed
>>>> regression framework[1]. I would like to draw the maintainers'
>>>> attention to these two bugs, [2] and [3], and ask for help arriving at
>>>> the RCA for these failures.
>>>>
>>>> Until then, we are disabling these two blocking tests.
>>>>
>>>> [1] https://build.gluster.org/job/distributed-regression/
>>>> [2] https://bugzilla.redhat.com/show_bug.cgi?id=1602282
>>>> [3] https://bugzilla.redhat.com/show_bug.cgi?id=1602262
>>>
>>> Bug updated with current progress:
>>> https://bugzilla.redhat.com/show_bug.cgi?id=1602262#c1
>>>
>>> Pasting it here for others to chime in based on past experience if any.
>>>
>>> <snip>
>>> This fails as follows,
>>> =========================
>>> TEST 52 (line 37): ln /mnt/glusterfs/0/file1 /mnt/glusterfs/0/file44
>>> ln: failed to create hard link ‘/mnt/glusterfs/0/file44’: No space left
>>> on device
>>> RESULT 52: 1
>>> =========================
>>> (this continues through the last file; IOW, file44-file50 fail creation)
>>> =========================
>>> TEST 58 (line 37): ln /mnt/glusterfs/0/file1 /mnt/glusterfs/0/file50
>>> ln: failed to create hard link ‘/mnt/glusterfs/0/file50’: No space left
>>> on device
>>> RESULT 58: 1
>>> =========================
>>>
>>> Past this point the failures are due to attempts to inspect these files
>>> for metadata, attrs, and such, so those failures all stem from the above.
>>>
>>> At first I suspected the max-hardlink setting, but that is at its default
>>> of 100, and we do not use any specific site.h or tuning when running in
>>> the distributed environment (as far as I can tell).
>>>
>>> Also, at the point of failure the test has created only one empty file
>>> and 42 links to it, which should not cause the bricks to run out of space.
>>>
>>> The Gluster logs so far have not turned up any surprises or causes.
>>
>> Just realized that the logs attached to the bug are not from this test
>> failure; I am requesting the right logs so that we can find the root
>> cause.
>>
>>> </snip>
>>>
>>>>
>>>> Thanks,
>>>> Deepshikha Khandelwal
>>>> _______________________________________________
>>>> Gluster-devel mailing list
>>>> Gluster-devel at gluster.org
>>>> https://lists.gluster.org/mailman/listinfo/gluster-devel
>>>>

