[Gluster-users] No removes shards, listed in "./shard/.remove_me/"

Vinayakswami Hariharmath vharihar at redhat.com
Fri Dec 4 10:42:16 UTC 2020

Hello Mikhail,

Yes. Whenever you delete any file from the gluster mount, the entries
in .shard/.remove_me are checked and their deletion is retried.

The issue seems to be related to the linkto file, and your last
comment (https://bugzilla.redhat.com/show_bug.cgi?id=1568521#c20) is quite
useful for analyzing the situation. I don't see similar information in
the attached logs.
Can you please provide the full set of the above-mentioned logs? Also, could
you please check your brick status to see whether a few of the bricks are
completely full?
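For the two checks above, a minimal shell sketch (the helper name and the example brick path are hypothetical; only the .shard/.remove_me layout comes from this thread):

```shell
#!/bin/sh
# Hypothetical helper: count the GFID entries still queued for
# background deletion under a brick's .shard/.remove_me directory.
count_remove_me() {
    find "$1/.shard/.remove_me" -mindepth 1 -maxdepth 1 2>/dev/null | wc -l
}

# Example usage (substitute your real brick path):
#   df -h /bricks/brick1              # is the brick filesystem full?
#   count_remove_me /bricks/brick1    # how many deletions are pending?
```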



On Fri, Dec 4, 2020 at 1:39 PM Михаил Гусев <gusevmk.uni at yandex.ru> wrote:

> Events:
> 1) Generated many test-* files from the client side to /mnt/glusterfs-mountpoint
> (via for i in .. ; do dd if=/dev/zero .. ). File size = 100 GB or 1 TB (about
> 38 TB in total; the gluster volume is 48 TB)
> 2) Ran rm -rf /mnt/glusterfs-mountpoint/test-*
> 3) There are no files left at /mnt/glusterfs-mountpoint/test-*, but the shards
> are still on the bricks' filesystems.
> Logs of the glusterfs server and client (server-log.txt and client-log.txt,
> covering the period of the rm -rf operation) are attached.
> 04.12.2020, 09:31, "Vinayakswami Hariharmath" <vharihar at redhat.com>:
> Hello,
> Bit of background about sharded file deletion:
> There are 2 parts to a sharded file: 1. the base file (1st shard, or reference
> shard) 2. the shards of the base file, stored as GFID.index
> When we delete a sharded file:
> 1. first, an entry for the base file is created (named by its GFID) under
> .shard/.remove_me
> 2. next, the base file is unlinked
> 3. in the background, the associated shards are cleaned up, and then
> finally the reference entry present at .shard/.remove_me is removed
> The reference created under .shard/.remove_me is always used to build the
> paths of the associated shards for deletion. So the background thread picks up
> the ".shard/.remove_me" entries, builds the shard paths, and deletes them.
> So from your description, it looks like steps 1 and 2 are done, but the
> background thread is getting ESTALE while cleaning up those
> .shard/.remove_me entries, so the shards are left undeleted and the space is
> not freed up.
> It looks strange that you are getting ESTALE even though the entry is present
> at .shard/.remove_me. Can you please post the complete logs from the time you
> performed the 1st deletion? A history of events would also help to analyze the
> issue.
> Regards
> Vh
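The three steps quoted above can be sketched as a shell function. This is only an illustration of the layout described in this thread (shards stored as <GFID>.<index> under .shard/, queue entries under .shard/.remove_me), not gluster's actual implementation:

```shell
#!/bin/sh
# Sketch of the background cleanup step: for each queued GFID,
# build the shard paths, delete them, then drop the queue entry.
cleanup_shards() {
    brick=$1
    for entry in "$brick/.shard/.remove_me"/*; do
        [ -e "$entry" ] || continue
        gfid=$(basename "$entry")
        # delete every numbered shard derived from the GFID
        rm -f "$brick/.shard/$gfid".*
        # finally remove the reference entry itself
        rm -f "$entry"
    done
}
```

If the last step fails (e.g. with ESTALE, as in this thread), the queue entry survives and the cleanup is retried on later deletions.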
> On Fri, Dec 4, 2020 at 11:33 AM Михаил Гусев <gusevmk.uni at yandex.ru>
> wrote:
> Hello, I think my problem is related to this issue:
> https://bugzilla.redhat.com/show_bug.cgi?id=1568521
> After removing files from the client (rm -rf /mountpoint/*), I get many errors
> in the gluster mnt log (client side):
> E [MSGID: 133021] [shard.c:3761:shard_delete_shards] 0-<volume
> name>-shard: Failed to clean up shards of gfid 4b5afa49-5446-49e2-a7ba-1b4f2ffadb12
> [Stale file handle]
> The files in the mountpoint were deleted, but no space became available after
> this operation.
> I did check: there are no files in the .glusterfs/unlink directory, but many
> files are in .shard/.remove_me/
> As far as I understand, the glusterd server process has to check the files in
> .shard/.remove_me/ and, if there is no mapping in .glusterfs/unlink, the
> shards must be removed. But it seems this is not working.
> # glusterd --version
> glusterfs 8.2
> # cat /etc/redhat-release
> Red Hat Enterprise Linux release 8.3 (Ootpa)
> gluster volume type: dispersed-distribute (sharding is enabled)
> ________
> Community Meeting Calendar:
> Schedule -
> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> Bridge: https://meet.google.com/cpu-eiue-hvk
> Gluster-users mailing list
> Gluster-users at gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
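The cross-check Mikhail describes in the quoted message (entries in .shard/.remove_me with no mapping in .glusterfs/unlink) can be sketched as follows; the helper name is hypothetical and the directory layout is assumed from the posts above:

```shell
#!/bin/sh
# List the GFIDs queued in .shard/.remove_me that have no corresponding
# entry in .glusterfs/unlink, i.e. the candidates whose shards should
# already have been removed by the background thread.
stale_queue_entries() {
    brick=$1
    for entry in "$brick/.shard/.remove_me"/*; do
        [ -e "$entry" ] || continue
        gfid=$(basename "$entry")
        [ -e "$brick/.glusterfs/unlink/$gfid" ] || echo "$gfid"
    done
}
```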
