[Bugs] [Bug 1564071] directories are invisible on client side

bugzilla at redhat.com bugzilla at redhat.com
Wed Apr 18 14:57:22 UTC 2018


https://bugzilla.redhat.com/show_bug.cgi?id=1564071



--- Comment #6 from g.amedick at uni-luebeck.de ---
Hi,

The affected directories seem to heal by themselves after 2-4 days, though new
ones keep popping up regularly.

We increased the log level from error to warning, and entries like the
following now show up on pretty much every brick (paths of user directories
have been replaced with $dir_…). We haven't seen any files affected by the
bug, but we don't know whether all appearances of hidden directories are
reported to us, or whether we raised the log level before or after the last
reported directory broke:

[2018-04-16 12:34:33.643937] W [MSGID: 120020]
[quota.c:2755:quota_rename_continue] $volume-quota: quota context not set in
inode (gfid:f44e77bc-a54c-4cb5-9f70-c581ed270f2d), considering file size as
zero while enforcing quota on new ancestry
[2018-04-16 12:44:33.979176] W [MSGID: 113103] [posix.c:282:posix_lookup]
$volume-posix: Found stale gfid handle
/srv/glusterfs/bricks/DATA111/data/.glusterfs/93/fa/93fa35bb-22fe-40dc-9415-f08186ab1c93,
removing it. [Stale file handle]
[2018-04-17 09:05:44.438907] A [MSGID: 120004] [quota.c:4998:quota_log_usage]
$volume-quota: Usage is above soft limit: 187.4TB used by /$dir_1
[2018-04-17 18:34:52.084247] W [MSGID: 113096]
[posix-handle.c:770:posix_handle_hard] $volume-posix: link
/srv/glusterfs/bricks/DATA111/data/$file_1 ->
/srv/glusterfs/bricks/DATA111/data/.glusterfs/10/21/1021cecf-08dc-48ab-a44e-6542cc8e75ac
failed [File exists]
[2018-04-17 18:34:52.084325] E [MSGID: 113020] [posix.c:3162:posix_create]
$volume-posix: setting gfid on /srv/glusterfs/bricks/DATA111/data/$file_1
failed
[2018-04-17 20:57:10.613860] W [MSGID: 113001]
[posix.c:4421:posix_get_ancestry_non_directory] $volume-posix: listxattr failed
on /srv/glusterfs/bricks/DATA111/data/.glusterfs/8d/7d/8d7dc368-b229-4d41-921c-546627a03248
[No such file or directory]
[2018-04-17 20:57:10.614719] W [marker-quota.c:33:mq_loc_copy] 0-marker: src
loc is not valid
[2018-04-17 20:57:10.614818] E [marker-quota.c:1488:mq_initiate_quota_task]
$volume-marker: loc copy failed
The message "W [MSGID: 113001] [posix.c:4421:posix_get_ancestry_non_directory]
$volume-posix: listxattr failed
on /srv/glusterfs/bricks/DATA111/data/.glusterfs/8d/7d/8d7dc368-b229-4d41-921c-546627a03248
[No such file or directory]" repeated 1300 times between [2018-04-17
20:57:10.613860] and [2018-04-17 20:57:11.536419]
[2018-04-17 21:34:42.809053] W [MSGID: 113096]
[posix-handle.c:770:posix_handle_hard] $volume-posix: link
/srv/glusterfs/bricks/DATA111/data/$file_2 ->
/srv/glusterfs/bricks/DATA111/data/.glusterfs/21/db/21db009b-aa53-4d26-afb1-8f1667574530
failed [File exists]
[2018-04-17 21:34:42.809115] E [MSGID: 113020] [posix.c:3162:posix_create]
$volume-posix: setting gfid on /srv/glusterfs/bricks/DATA111/data/$file_2
failed
[2018-04-17 21:34:42.809944] E [MSGID: 113018] [posix.c:552:posix_setattr]
$volume-posix: setattr (lstat) on
/srv/glusterfs/bricks/DATA111/data/.glusterfs/21/db/21db009b-aa53-4d26-afb1-8f1667574530
failed [No such file or directory]
[2018-04-17 21:34:42.811179] E [MSGID: 113001] [posix.c:4874:posix_getxattr]
$volume-posix: getxattr failed on
/srv/glusterfs/bricks/DATA111/data/.glusterfs/21/db/21db009b-aa53-4d26-afb1-8f1667574530:
trusted.glusterfs.dht.linkto  [No such file or directory]
[2018-04-18 09:08:02.273714] A [MSGID: 120004] [quota.c:4998:quota_log_usage]
$volume-quota: Usage is above soft limit: 188.8TB used by /$dir_1
[2018-04-18 10:50:54.072890] A [MSGID: 120004] [quota.c:4998:quota_log_usage]
$volume-quota: Usage is above soft limit: 4.0TB used by /$dir_2
[2018-04-18 10:50:54.073972] A [MSGID: 120004] [quota.c:4998:quota_log_usage]
$volume-quota: Usage is above soft limit: 4.0TB used by /$dir_2
[2018-04-18 11:20:12.880347] W [MSGID: 120020]
[quota.c:2755:quota_rename_continue] $volume-quota: quota context not set in
inode (gfid:367de5fb-c7c3-4bde-a8fa-a3a2cafc6abc), considering file size as
zero while enforcing quota on new ancestry
[2018-04-18 11:20:16.865349] W [MSGID: 120020]
[quota.c:2755:quota_rename_continue] $volume-quota: quota context not set in
inode (gfid:cd8877f1-c5d5-47bd-8a60-c3224c13e724), considering file size as
zero while enforcing quota on new ancestry
[2018-04-18 11:20:17.510650] W [MSGID: 120020]
[quota.c:2755:quota_rename_continue] $volume-quota: quota context not set in
inode (gfid:4eb88aac-8e95-4eac-8d48-1615a135efcd), considering file size as
zero while enforcing quota on new ancestry
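The stale-handle and hard-link warnings above refer to GlusterFS's backend
layout, where each file's GFID maps to a hard link under the brick's
.glusterfs directory (the first two hex bytes of the GFID become two
directory levels, as visible in the log paths). A minimal sketch, using a
temporary directory rather than a real brick, of how that mapping works and
why a handle with a link count of 1 looks stale:

```shell
#!/usr/bin/env bash
set -eu

# Simulated brick root; a real brick would be something like
# /srv/glusterfs/bricks/DATA111/data (path taken from the logs above).
brick=$(mktemp -d)
gfid=93fa35bb-22fe-40dc-9415-f08186ab1c93

# GFID -> handle path: .glusterfs/<byte0>/<byte1>/<full gfid>
handle="$brick/.glusterfs/${gfid:0:2}/${gfid:2:2}/$gfid"
mkdir -p "$(dirname "$handle")"

# Healthy case: data file plus its GFID hard link -> link count 2.
echo data > "$brick/file1"
ln "$brick/file1" "$handle"
stat -c '%h' "$handle"        # prints 2

# Stale case: the data file is gone but the handle remains -> link
# count 1, the situation posix_lookup warns about and cleans up above.
rm "$brick/file1"
stat -c '%h' "$handle"        # prints 1

rm -rf "$brick"
```

This only illustrates the layout the log messages assume; on a live brick one
would inspect the real handle path with stat rather than creating files.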

We are unsure whether this is related or relevant.
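For reference, the brick log level mentioned above is normally adjusted
through a volume option; a sketch, with $volume standing in for the affected
volume's name:

```shell
# Raise brick log verbosity from ERROR to WARNING so messages like the
# ones quoted above are recorded ($volume is a placeholder).
gluster volume set $volume diagnostics.brick-log-level WARNING
```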

