<div dir="ltr"><div dir="ltr"><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Wed, 30 Jan 2019 at 19:12, Gudrun Mareike Amedick <<a href="mailto:g.amedick@uni-luebeck.de">g.amedick@uni-luebeck.de</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Hi,<br>
<br>
a bit of additional info inline<br>
<br>
On Monday, 28 January 2019 at 10:23 +0100, Frank Ruehlemann wrote:<br>
> On Monday, 28 January 2019 at 09:50 +0530, Nithya Balachandran wrote:<br>
> > <br>
> > On Fri, 25 Jan 2019 at 20:51, Gudrun Mareike Amedick <<br>
> > <a href="mailto:g.amedick@uni-luebeck.de" target="_blank">g.amedick@uni-luebeck.de</a>> wrote:<br>
> > <br>
> > > <br>
> > > Hi all,<br>
> > > <br>
> > > we have a problem with a distributed dispersed volume (GlusterFS 3.12). We<br>
> > > have files that lost their permissions or gained sticky bits. The files<br>
> > > themselves seem to be okay.<br>
> > > <br>
> > > It looks like this:<br>
> > > <br>
> > > # ls -lah $file1<br>
> > > ---------- 1 www-data www-data 45M Jan 12 07:01 $file1<br>
> > > <br>
> > > # ls -lah $file2<br>
> > > -rw-rwS--T 1 $user $group 11K Jan 9 11:48 $file2<br>
> > > <br>
> > > # ls -lah $file3<br>
> > > ---------T 1 $user $group 6.8M Jan 12 08:17 $file3<br>
> > > <br>
> > > These are linkto files (internal dht files) and should not be visible on<br>
> > the mount point. Are they consistently visible like this or do they revert<br>
> > to the proper permissions after some time?<br>
> They haven't healed yet, even after more than 4 weeks. Therefore we decided<br>
> to recommend that our users fix their files by setting the correct<br>
> permissions again, which worked without problems. But for analysis<br>
> purposes we still have some broken files nobody has touched yet.<br>
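> Fixing a file was just a plain chmod on the mount, roughly like this<br>
> (the modes are examples, whatever the file had before):<br>
> <br>
> # chmod 0644 $file1    # restore the old mode; this also clears the stray bits<br>
> # chmod u-s,g-s,o-t $file2    # or only strip the bogus S/T bits and keep the rest<br>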
> <br>
> We know about these linkto files, but they were never visible to clients. We<br>
> ran these ls commands on a client, not on a brick.<br>
<br>
They have linkfile permissions, but on the brick side it looks like this:<br>
<br>
root@gluster06:~# ls -lah /$brick/$file3<br>
---------T 2 $user $group 1.7M Jan 12 08:17 /$brick/$file3<br>
<br>
That seems to be too big for a linkfile. Also, there is no file it could link to. There's no other file with that name at that path on any other<br>
subvolume.<br></blockquote><div><br></div><div>This sounds like the rebalance failed to transition the file from a linkto to a data file once the migration was complete. Please check the rebalance logs on all nodes for any messages that refer to this file.</div><div>If you still see any such files, please check their xattrs directly on the brick. You should see one called trusted.glusterfs.dht.linkto. Let me know if that is missing.</div>
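<div><br></div><div>Something like this (run directly on the brick; the path is just the example from above) should dump all the xattrs, including the linkto one if it is still there:</div><div><br></div><div># getfattr -d -m . -e hex /$brick/$file3</div><div><br></div><div>If trusted.glusterfs.dht.linkto shows up on a file that clearly holds data, the rebalance most likely never removed it after the migration.</div><div><br></div><div>Regards,</div><div>Nithya</div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">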
<br>
<br>
> <br>
> > <br>
> > > <br>
> > > This is not what the permissions are supposed to look like. They were 644 or<br>
> > > 660 before. And they definitely had no sticky bits.<br>
> > > The permissions on the bricks match what I see on the client side. So I think<br>
> > > the original permissions are lost without a chance to recover them, right?<br>
> > > <br>
> > > <br>
> > > With some files with weird looking permissions (but not with all of them),<br>
> > > I can do this:<br>
> > > # ls -lah $path/$file4<br>
> > > -rw-r--r-- 1 $user $group 6.0G Oct 11 09:34 $path/$file4<br>
> > > ls -lah $path | grep $file4<br>
> > > -rw-r-Sr-T 1 $user $group 6.0G Oct 11 09:34 $file4<br>
> > <br>
> > > <br>
> > > So, the permissions I see depend on how I'm querying them. The permissions<br>
> > > on the brick side agree with the latter result; stat sees the former. I'm not<br>
> > > sure how that works.<br>
> > > <br>
> > The S and T bits indicate that a file is being migrated. The difference<br>
> > seems to be because of the way lookup versus readdirp handle this - this<br>
> > looks like a bug. Lookup will strip out the internal permissions set. I<br>
> > don't think readdirp does. This is happening because a rebalance is in<br>
> > progress.<br>
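> > A quick way to see both behaviours side by side (paths are placeholders):<br>
> > <br>
> > # stat -c '%A %n' $path/$file4    # named lookup - the internal bits are stripped<br>
> > # ls -l $path | grep $file4    # listing goes through readdirp - S/T bits can show up<br>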
> There is no active rebalance. At least, none is visible in "gluster volume<br>
> rebalance $VOLUME status".<br>
> <br>
> And the last line in the rebalance log file of this volume is:<br>
> "[2019-01-11 02:14:50.101944] W … received signum (15), shutting down"<br>
> <br>
> > <br>
> > > <br>
> > > We know for at least a part of those files that they were okay on December<br>
> > > 19th. We got the first reports of weird-looking permissions on January<br>
> > > 12th. In between, there was a rebalance running (January 7th to January<br>
> > > 11th). During that rebalance, a node was offline for an extended period of time<br>
> > > due to hardware issues. The output of "gluster volume heal $VOLUME info"<br>
> > > shows no files though.<br>
> > > <br>
> > > For all files with broken permissions we found so far, the following lines<br>
> > > are in the rebalance log:<br>
> > > <br>
> > > [2019-01-07 09:31:11.004802] I [MSGID: 109045]<br>
> > > [dht-common.c:2456:dht_lookup_cbk] 0-$VOLUME-dht: linkfile not having link<br>
> > > subvol for $file5<br>
> > > [2019-01-07 09:31:11.262273] I [MSGID: 109069]<br>
> > > [dht-common.c:1410:dht_lookup_unlink_of_false_linkto_cbk] 0-$VOLUME-dht:<br>
> > > lookup_unlink returned with<br>
> > > op_ret -> 0 and op-errno -> 0 for $file5<br>
> > > [2019-01-07 09:31:11.266014] I [dht-rebalance.c:1570:dht_migrate_file]<br>
> > > 0-$VOLUME-dht: $file5: attempting to move from $VOLUME-readdir-ahead-0 to<br>
> > > $VOLUME-readdir-ahead-5<br>
> > > [2019-01-07 09:31:11.278120] I [dht-rebalance.c:1570:dht_migrate_file]<br>
> > > 0-$VOLUME-dht: $file5: attempting to move from $VOLUME-readdir-ahead-0 to<br>
> > > $VOLUME-readdir-ahead-5<br>
> > > [2019-01-07 09:31:11.732175] W [dht-rebalance.c:2159:dht_migrate_file]<br>
> > > 0-$VOLUME-dht: $file5: failed to perform removexattr on<br>
> > > $VOLUME-readdir-ahead-0<br>
> > > (No data available)<br>
> > > [2019-01-07 09:31:11.737319] W [MSGID: 109023]<br>
> > > [dht-rebalance.c:2179:dht_migrate_file] 0-$VOLUME-dht: $file5: failed to do<br>
> > > a stat on $VOLUME-readdir-<br>
> > > ahead-0 [No such file or directory]<br>
> > > [2019-01-07 09:31:11.744382] I [MSGID: 109022]<br>
> > > [dht-rebalance.c:2218:dht_migrate_file] 0-$VOLUME-dht: completed migration<br>
> > > of $file5 from subvolume<br>
> > > $VOLUME-readdir-ahead-0 to $VOLUME-readdir-ahead-5<br>
> > > [2019-01-07 09:31:11.744676] I [MSGID: 109022]<br>
> > > [dht-rebalance.c:2218:dht_migrate_file] 0-$VOLUME-dht: completed migration<br>
> > > of $file5 from subvolume<br>
> > > $VOLUME-readdir-ahead-0 to $VOLUME-readdir-ahead-5<br>
> > > <br>
> > > <br>
> > > <br>
> > > I've searched the brick logs for $file5 with broken permissions and found<br>
> > > this on all bricks from (I think) the subvolume $VOLUME-readdir-ahead-5:<br>
> > > <br>
> > > [2019-01-07 09:32:13.821545] I [MSGID: 113030] [posix.c:2171:posix_unlink]<br>
> > > 0-$VOLUME-posix: open-fd-key-status: 0 for $file5<br>
> > > [2019-01-07 09:32:13.821609] I [MSGID: 113031]<br>
> > > [posix.c:2084:posix_skip_non_linkto_unlink] 0-posix: linkto_xattr status: 0<br>
> > > for $file5<br>
> > > <br>
> > > <br>
> > > <br>
> > > Also, we noticed that many directories got their modification time<br>
> > > updated. It was set to the rebalance date. Is that supposed to happen?<br>
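> > > (For reference, they are easy to spot with something like this; $mountpoint<br>
> > > is a placeholder:)<br>
> > > <br>
> > > # find /$mountpoint -type d -newermt '2019-01-07' ! -newermt '2019-01-12'<br>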
> > > <br>
> > > <br>
> > > We had parallel-readdir enabled during the rebalance. We disabled it since<br>
> > > we had empty directories that couldn't be deleted. I was able to delete<br>
> > > those dirs after that.<br>
> > > <br>
> > Was this disabled during the rebalance? parallel-readdir changes the<br>
> > volume graph for clients but not for the rebalance process, causing it to<br>
> > fail to find the linkto subvols.<br>
> Yes, parallel-readdir was enabled during the rebalance. But we disabled<br>
> it after some files were invisible on the client side again.<br>
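> Checking and disabling it was just the usual volume option handling,<br>
> i.e. roughly:<br>
> <br>
> # gluster volume get $VOLUME performance.parallel-readdir<br>
> # gluster volume set $VOLUME performance.parallel-readdir off<br>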
<br>
The timetable looks like this:<br>
<br>
December 12th: parallel-readdir enabled<br>
January 7th: rebalance started<br>
January 11th/12th: rebalance finished (varied a bit, some servers were faster)<br>
January 15th: parallel-readdir disabled<br>
<br>
> <br>
> > <br>
> > > <br>
> > > <br>
> > > Also, we have directories that lost their GFID on some bricks. Again.<br>
> > <br>
> > Is this the missing symlink problem that was reported earlier?<br>
<br>
Looks like it. I had a dir with a missing GFID on one brick and couldn't see some files on the client side; I recreated the GFID symlink and everything was<br>
fine again.<br>
And in the brick log, I had this entry (with 1d372a8a-4958-4700-8ef1-fa4f756baad3 being the GFID of the dir in question):<br>
<br>
[2019-01-13 17:57:55.020859] W [MSGID: 113103] [posix.c:301:posix_lookup] 0-$VOLUME-posix: Found stale gfid handle<br>
/srv/glusterfs/bricks/$brick/data/.glusterfs/1d/37/1d372a8a-4958-4700-8ef1-fa4f756baad3, removing it. [No such file or directory]<br>
<br>
Very familiar. At least, I know how to fix that :D<br>
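In case someone else runs into it, recreating the handle is roughly this (done directly on the affected brick; $parentdir, $parentgfid and $dirname are placeholders, and the symlink target follows the usual layout for directory GFID handles):<br>
<br>
# getfattr -n trusted.gfid -e hex /srv/glusterfs/bricks/$brick/data/$parentdir    # note the parent directory's GFID<br>
# ln -s ../../$pg1/$pg2/$parentgfid/$dirname /srv/glusterfs/bricks/$brick/data/.glusterfs/1d/37/1d372a8a-4958-4700-8ef1-fa4f756baad3<br>
<br>
($pg1 and $pg2 are the first two and the next two hex characters of the parent GFID.)<br>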
<br>
Kind regards<br>
<br>
Gudrun<br>
<br>
> > <br>
> > Regards,<br>
> > Nithya<br>
> > <br>
> > > <br>
> > > <br>
> > > <br>
> > > <br>
> > > What happened? Can we do something to fix this? And could that happen<br>
> > > again?<br>
> > > <br>
> > > We want to upgrade to 4.1 soon. Is it safe to do that or could it make<br>
> > > things worse?<br>
> > > <br>
> > > Kind regards<br>
> > > <br>
> > > Gudrun Amedick<br>
> > > _______________________________________________<br>
> > > Gluster-users mailing list<br>
> > > <a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a><br>
> > > <a href="https://lists.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">https://lists.gluster.org/mailman/listinfo/gluster-users</a><br>
> > _______________________________________________<br>
> > Gluster-users mailing list<br>
> > <a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a><br>
> > <a href="https://lists.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">https://lists.gluster.org/mailman/listinfo/gluster-users</a></blockquote></div></div>