[Gluster-users] Files losing permissions in GlusterFS 3.12
Nithya Balachandran
nbalacha at redhat.com
Thu Jan 31 09:16:30 UTC 2019
On Wed, 30 Jan 2019 at 19:12, Gudrun Mareike Amedick <
g.amedick at uni-luebeck.de> wrote:
> Hi,
>
> a bit of additional info inline.
>
> On Mon, 28 Jan 2019 at 10:23 +0100, Frank Ruehlemann wrote:
> > On Mon, 28 Jan 2019 at 09:50 +0530, Nithya Balachandran wrote:
> > >
> > > On Fri, 25 Jan 2019 at 20:51, Gudrun Mareike Amedick <
> > > g.amedick at uni-luebeck.de> wrote:
> > >
> > > >
> > > > Hi all,
> > > >
> > > > we have a problem with a distributed dispersed volume (GlusterFS 3.12).
> > > > We have files that lost their permissions or gained sticky bits. The
> > > > files themselves seem to be okay.
> > > >
> > > > It looks like this:
> > > >
> > > > # ls -lah $file1
> > > > ---------- 1 www-data www-data 45M Jan 12 07:01 $file1
> > > >
> > > > # ls -lah $file2
> > > > -rw-rwS--T 1 $user $group 11K Jan 9 11:48 $file2
> > > >
> > > > # ls -lah $file3
> > > > ---------T 1 $user $group 6.8M Jan 12 08:17 $file3
> > > >
> > > These are linkto files (internal dht files) and should not be visible on
> > > the mount point. Are they consistently visible like this or do they
> > > revert to the proper permissions after some time?
> > They haven't healed yet, even after more than 4 weeks. Therefore we
> > decided to recommend that our users fix their files by setting the correct
> > permissions again, which worked without problems. But for analysis
> > purposes we still have some broken files that nobody has touched yet.
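> > (Roughly, on the FUSE mount, something like "# chmod 0644 $file1"; an
> > explicit four-digit octal mode also clears the stray setgid/sticky bits.)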
> >
> > We know about these linkto files, but they were never visible to clients.
> > We ran these ls commands on a client, not on a brick.
>
> They have linkfile permissions, but on the brick side it looks like this:
>
> root at gluster06:~# ls -lah /$brick/$file3
> ---------T 2 $user $group 1.7M Jan 12 08:17 /$brick/$file3
>
> That seems to be too big for a linkfile. Also, there is no file it could
> link to. There's no other file with that name at that path on any other
> subvolume.
>
This sounds like the rebalance failed to transition the file from a linkto
to a data file once the migration was complete. Please check the rebalance
logs on all nodes for any messages that refer to this file.
If you still see any such files, please check their xattrs directly on the
brick. You should see one called trusted.glusterfs.dht.linkto. Let me know
if that is missing.
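For example (the rebalance log path below is the usual default and may
differ on your setup; the names are just placeholders):

# getfattr -d -m . -e hex /$brick/$file3
# grep "$file3" /var/log/glusterfs/$VOLUME-rebalance.log
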
Regards,
Nithya
>
>
> >
> > >
> > > >
> > > > This is not what the permissions are supposed to look like. They were
> > > > 644 or 660 before. And they definitely had no sticky bits.
> > > > The permissions on the bricks match what I see on the client side. So
> > > > I think the original permissions are lost without a chance to recover
> > > > them, right?
> > > >
> > > >
> > > > With some files with weird-looking permissions (but not with all of
> > > > them), I can do this:
> > > > # ls -lah $path/$file4
> > > > -rw-r--r-- 1 $user $group 6.0G Oct 11 09:34 $path/$file4
> > > > # ls -lah $path | grep $file4
> > > > -rw-r-Sr-T 1 $user $group 6.0G Oct 11 09:34 $file4
> > >
> > > >
> > > > So, the permissions I see depend on how I'm querying them. The
> > > > permissions on the brick side agree with the latter result; stat sees
> > > > the former. I'm not sure how that works.
> > > >
> > > The S and T bits indicate that a file is being migrated. The difference
> > > seems to be because of the way lookup versus readdirp handle this - this
> > > looks like a bug. Lookup will strip out the internal permissions set. I
> > > don't think readdirp does. This is happening because a rebalance is in
> > > progress.
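> > > (If it helps, a rough way to list files on a brick that still carry
> > > these internal bits - the brick path is just an example:
> > > # find /$brick/data -type f -perm -1000 -ls
> > > This matches files with the sticky bit set, which regular data files
> > > normally don't have.)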
> > There is no active rebalance. At least, none is visible in "gluster
> > volume rebalance $VOLUME status".
> >
> > And the last line in the rebalance log file of this volume is:
> > "[2019-01-11 02:14:50.101944] W … received signum (15), shutting down"
> >
> > >
> > > >
> > > > We know for at least a part of those files that they were okay on
> > > > December 19th. We got the first reports of weird-looking permissions
> > > > on January 12th. In between, there was a rebalance running (January
> > > > 7th to January 11th). During that rebalance, a node was offline for a
> > > > longer period of time due to hardware issues. The output of "gluster
> > > > volume heal $VOLUME info" shows no files though.
> > > >
> > > > For all files with broken permissions we found so far, the following
> > > > lines are in the rebalance log:
> > > >
> > > > [2019-01-07 09:31:11.004802] I [MSGID: 109045]
> > > > [dht-common.c:2456:dht_lookup_cbk] 0-$VOLUME-dht: linkfile not having
> > > > link subvol for $file5
> > > > [2019-01-07 09:31:11.262273] I [MSGID: 109069]
> > > > [dht-common.c:1410:dht_lookup_unlink_of_false_linkto_cbk] 0-$VOLUME-dht:
> > > > lookup_unlink returned with op_ret -> 0 and op-errno -> 0 for $file5
> > > > [2019-01-07 09:31:11.266014] I [dht-rebalance.c:1570:dht_migrate_file]
> > > > 0-$VOLUME-dht: $file5: attempting to move from $VOLUME-readdir-ahead-0
> > > > to $VOLUME-readdir-ahead-5
> > > > [2019-01-07 09:31:11.278120] I [dht-rebalance.c:1570:dht_migrate_file]
> > > > 0-$VOLUME-dht: $file5: attempting to move from $VOLUME-readdir-ahead-0
> > > > to $VOLUME-readdir-ahead-5
> > > > [2019-01-07 09:31:11.732175] W [dht-rebalance.c:2159:dht_migrate_file]
> > > > 0-$VOLUME-dht: $file5: failed to perform removexattr on
> > > > $VOLUME-readdir-ahead-0 (No data available)
> > > > [2019-01-07 09:31:11.737319] W [MSGID: 109023]
> > > > [dht-rebalance.c:2179:dht_migrate_file] 0-$VOLUME-dht: $file5: failed
> > > > to do a stat on $VOLUME-readdir-ahead-0 [No such file or directory]
> > > > [2019-01-07 09:31:11.744382] I [MSGID: 109022]
> > > > [dht-rebalance.c:2218:dht_migrate_file] 0-$VOLUME-dht: completed
> > > > migration of $file5 from subvolume $VOLUME-readdir-ahead-0 to
> > > > $VOLUME-readdir-ahead-5
> > > > [2019-01-07 09:31:11.744676] I [MSGID: 109022]
> > > > [dht-rebalance.c:2218:dht_migrate_file] 0-$VOLUME-dht: completed
> > > > migration of $file5 from subvolume $VOLUME-readdir-ahead-0 to
> > > > $VOLUME-readdir-ahead-5
> > > >
> > > >
> > > >
> > > > I've searched the brick logs for $file5 with broken permissions and
> > > > found this on all bricks from (I think) the subvolume
> > > > $VOLUME-readdir-ahead-5:
> > > >
> > > > [2019-01-07 09:32:13.821545] I [MSGID: 113030]
> > > > [posix.c:2171:posix_unlink] 0-$VOLUME-posix: open-fd-key-status: 0 for
> > > > $file5
> > > > [2019-01-07 09:32:13.821609] I [MSGID: 113031]
> > > > [posix.c:2084:posix_skip_non_linkto_unlink] 0-posix: linkto_xattr
> > > > status: 0 for $file5
> > > >
> > > >
> > > >
> > > > Also, we noticed that many directories got their modification time
> > > > updated. It was set to the rebalance date. Is that supposed to happen?
> > > >
> > > >
> > > > We had parallel-readdir enabled during the rebalance. We disabled it
> > > > since we had empty directories that couldn't be deleted. I was able to
> > > > delete those dirs after that.
> > > >
> > > Was this disabled during the rebalance? parallel-readdirp changes the
> > > volume graph for clients but not for the rebalance process, causing it
> > > to fail to find the linkto subvols.
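> > > For reference, checking and disabling the option should look roughly
> > > like this (using the full option name):
> > > # gluster volume get $VOLUME performance.parallel-readdir
> > > # gluster volume set $VOLUME performance.parallel-readdir off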
> > Yes, parallel-readdirp was enabled during the rebalance. But we disabled
> > it after some files were invisible on the client side again.
>
> The timetable looks like this:
>
> December 12th: parallel-readdir enabled
> January 7th: rebalance started
> January 11th/12th: rebalance finished (varied a bit, some servers were
> faster)
> January 15th: parallel-readdir disabled
>
> >
> > >
> > > >
> > > >
> > > > Also, we have directories that lost their GFID on some bricks. Again.
> > >
> > > Is this the missing symlink problem that was reported earlier?
>
> Looks like it. I had a dir with a missing GFID on one brick and couldn't
> see some files on the client side; I recreated the GFID symlink and
> everything was fine again.
> And in the brick log, I had this entry (with
> 1d372a8a-4958-4700-8ef1-fa4f756baad3 being the GFID of the dir in question):
>
> [2019-01-13 17:57:55.020859] W [MSGID: 113103] [posix.c:301:posix_lookup]
> 0-$VOLUME-posix: Found stale gfid handle
> /srv/glusterfs/bricks/$brick/data/.glusterfs/1d/37/1d372a8a-4958-4700-8ef1-fa4f756baad3,
> removing it. [No such file or directory]
>
> Very familiar. At least, I know how to fix that :D
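> (For reference, a rough sketch of how I'd recreate such a handle, assuming
> another brick still has the symlink so the target can be copied from it:
>
> # readlink /$healthy_brick/.glusterfs/1d/37/1d372a8a-4958-4700-8ef1-fa4f756baad3
> # ln -s "<target printed above>" /srv/glusterfs/bricks/$brick/data/.glusterfs/1d/37/1d372a8a-4958-4700-8ef1-fa4f756baad3
>
> For a directory the handle is a relative symlink into the parent
> directory's own handle, so copying the target from a healthy brick keeps
> it consistent.)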
>
> Kind regards
>
> Gudrun
>
> > >
> > > Regards,
> > > Nithya
> > >
> > > >
> > > >
> > > >
> > > >
> > > > What happened? Can we do something to fix this? And could that happen
> > > > again?
> > > >
> > > > We want to upgrade to 4.1 soon. Is it safe to do that or could it make
> > > > things worse?
> > > >
> > > > Kind regards
> > > >
> > > > Gudrun Amedick
> > > > _______________________________________________
> > > > Gluster-users mailing list
> > > > Gluster-users at gluster.org
> > > > https://lists.gluster.org/mailman/listinfo/gluster-users
> > > _______________________________________________
> > > Gluster-users mailing list
> > > Gluster-users at gluster.org
> > > https://lists.gluster.org/mailman/listinfo/gluster-users