<div dir="ltr"><div dir="ltr"><br></div><br><div class="gmail_quote"><div dir="ltr" class="m_5441747763864646546gmail_attr">On Fri, 25 Jan 2019 at 20:51, Gudrun Mareike Amedick <<a href="mailto:g.amedick@uni-luebeck.de" target="_blank">g.amedick@uni-luebeck.de</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Hi all,<br>
<br>
we have a problem with a distributed dispersed volume (GlusterFS 3.12). We have files that lost their permissions or gained sticky bits. The files<br>
themselves seem to be okay.<br>
<br>
It looks like this:<br>
<br>
# ls -lah $file1<br>
---------- 1 www-data www-data 45M Jan 12 07:01 $file1<br>
<br>
# ls -lah $file2<br>
-rw-rwS--T 1 $user $group 11K Jan 9 11:48 $file2<br>
<br>
# ls -lah $file3<br>
---------T 1 $user $group 6.8M Jan 12 08:17 $file3<br>
<br></blockquote><div>These are linkto files (internal dht files) and should not be visible on the mount point. Are they consistently visible like this or do they revert to the proper permissions after some time?</div><div><br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
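</blockquote><div>Such linkto files can be confirmed on the brick itself: a DHT linkto file carries a trusted.glusterfs.dht.linkto xattr naming the subvolume that actually holds the data. Below is a minimal stdlib sketch of reading such an xattr — not GlusterFS itself: since trusted.* xattrs need root and a real brick, it simulates one in the user.* namespace on a scratch file, and the volume name "myvol" is made up:</div>

```python
import os
import tempfile

# On a real brick (as root) you would inspect the suspect file with:
#   getfattr -n trusted.glusterfs.dht.linkto -e text /brick/path/to/file
# Stand-in below: a user.* xattr on a scratch file, because trusted.*
# xattrs require root and an actual brick. "myvol" is a made-up volume.
with tempfile.NamedTemporaryFile(dir=".") as f:  # cwd, not /tmp: tmpfs may lack user xattrs
    os.setxattr(f.name, "user.glusterfs.dht.linkto", b"myvol-readdir-ahead-5")
    target = os.getxattr(f.name, "user.glusterfs.dht.linkto")
    print(target.decode())  # the subvolume the linkto file points at
```

<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">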
This is not what the permissions are supposed to look like. They were 644 or 660 before. And they definitely had no sticky bits.<br>
The permissions on the bricks match what I see on the client side. So I think the original permissions are lost, with no chance to recover them, right?<br>
<br>
<br>
With some files with weird-looking permissions (but not all of them), I can do this:<br>
# ls -lah $path/$file4<br>
-rw-r--r-- 1 $user $group 6.0G Oct 11 09:34 $path/$file4<br>
# ls -lah $path | grep $file4<br>
-rw-r-Sr-T 1 $user $group 6.0G Oct 11 09:34 $file4 </blockquote><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<br>
So, the permissions I see depend on how I'm querying them. The permissions on the brick side agree with the latter result; stat sees the former. I'm not sure how that works.<br></blockquote><div>The S and T bits indicate that a file is being migrated. The difference seems to be caused by the way lookup versus readdirp handle this - this looks like a bug. Lookup will strip out the internal permission bits; I don't think readdirp does. This is happening because a rebalance is in progress.</div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
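</blockquote><div>For reference, the mode strings above decode exactly as migration markers: setgid without group-execute prints as "S", and sticky without other-execute prints as "T". A quick stdlib sketch — the octal modes here are my reconstruction from the listings, not values read from the bricks:</div>

```python
import stat

# Modes reconstructed from the ls output above (an assumption, not
# something read from the actual bricks).
linkto = stat.S_IFREG | 0o1000     # bare sticky bit: a pure DHT linkto file
migrating = stat.S_IFREG | 0o3660  # a 660 file with setgid+sticky set during migration

print(stat.filemode(linkto))     # ---------T
print(stat.filemode(migrating))  # -rw-rwS--T
```

<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">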
<br>
We know that at least some of those files were still okay on December 19th. We got the first reports of weird-looking permissions on January<br>
12th. In between, a rebalance was running (January 7th to January 11th). During that rebalance, a node was offline for an extended period of time<br>
due to hardware issues. The output of "gluster volume heal $VOLUME info" shows no files, though.<br>
<br>
For all files with broken permissions that we've found so far, the following lines appear in the rebalance log:<br>
<br>
[2019-01-07 09:31:11.004802] I [MSGID: 109045] [dht-common.c:2456:dht_lookup_cbk] 0-$VOLUME-dht: linkfile not having link subvol for $file5<br>
[2019-01-07 09:31:11.262273] I [MSGID: 109069] [dht-common.c:1410:dht_lookup_unlink_of_false_linkto_cbk] 0-$VOLUME-dht: lookup_unlink returned with op_ret -> 0 and op-errno -> 0 for $file5<br>
[2019-01-07 09:31:11.266014] I [dht-rebalance.c:1570:dht_migrate_file] 0-$VOLUME-dht: $file5: attempting to move from $VOLUME-readdir-ahead-0 to $VOLUME-readdir-ahead-5<br>
[2019-01-07 09:31:11.278120] I [dht-rebalance.c:1570:dht_migrate_file] 0-$VOLUME-dht: $file5: attempting to move from $VOLUME-readdir-ahead-0 to $VOLUME-readdir-ahead-5<br>
[2019-01-07 09:31:11.732175] W [dht-rebalance.c:2159:dht_migrate_file] 0-$VOLUME-dht: $file5: failed to perform removexattr on $VOLUME-readdir-ahead-0 (No data available)<br>
[2019-01-07 09:31:11.737319] W [MSGID: 109023] [dht-rebalance.c:2179:dht_migrate_file] 0-$VOLUME-dht: $file5: failed to do a stat on $VOLUME-readdir-ahead-0 [No such file or directory]<br>
[2019-01-07 09:31:11.744382] I [MSGID: 109022] [dht-rebalance.c:2218:dht_migrate_file] 0-$VOLUME-dht: completed migration of $file5 from subvolume $VOLUME-readdir-ahead-0 to $VOLUME-readdir-ahead-5<br>
[2019-01-07 09:31:11.744676] I [MSGID: 109022] [dht-rebalance.c:2218:dht_migrate_file] 0-$VOLUME-dht: completed migration of $file5 from subvolume $VOLUME-readdir-ahead-0 to $VOLUME-readdir-ahead-5<br>
<br>
<br>
<br>
I've searched the brick logs for $file5 (one of the files with broken permissions) and found this on all bricks of (I think) the subvolume $VOLUME-readdir-ahead-5:<br>
<br>
[2019-01-07 09:32:13.821545] I [MSGID: 113030] [posix.c:2171:posix_unlink] 0-$VOLUME-posix: open-fd-key-status: 0 for $file5<br>
[2019-01-07 09:32:13.821609] I [MSGID: 113031] [posix.c:2084:posix_skip_non_linkto_unlink] 0-posix: linkto_xattr status: 0 for $file5<br>
<br>
<br>
<br>
Also, we noticed that many directories got their modification time updated. It was set to the rebalance date. Is that supposed to happen?<br>
<br>
<br>
We had parallel-readdir enabled during the rebalance. We disabled it since we had empty directories that couldn't be deleted; after that, I was able to delete those dirs. </blockquote><div><br></div><div>Was this disabled during the rebalance? parallel-readdir changes the volume graph for clients but not for the rebalance process, causing it to fail to find the linkto subvols.</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<br>
Also, we have directories that lost their GFID on some bricks. Again.</blockquote><div><br></div><div>Is this the missing symlink problem that was reported earlier? </div><div><br></div><div>Regards,</div><div>Nithya</div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"> <br></blockquote><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<br>
What happened? Can we do something to fix this? And could that happen again?<br>
<br>
We want to upgrade to 4.1 soon. Is it safe to do that or could it make things worse?<br>
<br>
Kind regards<br>
<br>
Gudrun Amedick<br>
_______________________________________________<br>
Gluster-users mailing list<br>
<a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a><br>
<a href="https://lists.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">https://lists.gluster.org/mailman/listinfo/gluster-users</a></blockquote></div></div>