[Gluster-users] Deleted file sometimes remains in .glusterfs/unlink
David Spisla
spisla80 at gmail.com
Tue Nov 20 10:03:57 UTC 2018
Hello Ravi,
I am using Gluster v4.1.5 and have a replica 4 volume. This is the info:
Volume Name: testv1
Type: Replicate
Volume ID: a5b2d650-4e93-4334-94bb-3105acb112d1
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 4 = 4
Transport-type: tcp
Bricks:
Brick1: fs-davids-c1-n1:/gluster/brick1/glusterbrick
Brick2: fs-davids-c1-n2:/gluster/brick1/glusterbrick
Brick3: fs-davids-c1-n3:/gluster/brick1/glusterbrick
Brick4: fs-davids-c1-n4:/gluster/brick1/glusterbrick
Options Reconfigured:
performance.client-io-threads: off
nfs.disable: on
transport.address-family: inet
user.smb: disable
features.read-only: off
features.worm: off
features.worm-file-level: on
features.retention-mode: enterprise
features.default-retention-period: 120
network.ping-timeout: 10
features.cache-invalidation: on
features.cache-invalidation-timeout: 600
performance.nl-cache: on
performance.nl-cache-timeout: 600
client.event-threads: 32
server.event-threads: 32
cluster.lookup-optimize: on
performance.stat-prefetch: on
performance.cache-invalidation: on
performance.md-cache-timeout: 600
performance.cache-samba-metadata: on
performance.cache-ima-xattrs: on
performance.io-thread-count: 64
cluster.use-compound-fops: on
performance.cache-size: 512MB
performance.cache-refresh-timeout: 10
performance.read-ahead: off
performance.write-behind-window-size: 4MB
performance.write-behind: on
storage.build-pgfid: on
auth.ssl-allow: *
client.ssl: on
server.ssl: on
features.utime: on
storage.ctime: on
features.bitrot: on
features.scrub: Active
features.scrub-freq: daily
cluster.enable-shared-storage: enable
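
For completeness, one way to check for such lingering entries on a brick
node (brick path taken from the volume info above; <brick-pid> is a
placeholder):

    # on one of the brick nodes, e.g. fs-davids-c1-n1
    ls -l /gluster/brick1/glusterbrick/.glusterfs/unlink

    # look up the brick process PID, then see whether it still holds an
    # open fd pointing into .glusterfs/unlink
    gluster volume status testv1
    ls -l /proc/<brick-pid>/fd | grep unlink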
Regards
David
On Tue, 20 Nov 2018 at 07:33, Ravishankar N <ravishankar at redhat.com> wrote:
>
>
> On 11/19/2018 08:18 PM, David Spisla wrote:
>
> Hello Gluster Community,
>
> sometimes it happens that a file accessed via FUSE or SMB remains in
> .glusterfs/unlink after deleting it. The command 'df -hT' still prints the
> volume capacity from before the file was deleted. Another observation is
> that after waiting a whole night the file is removed completely and the
> capacity is correct again. Is this behaviour "works as designed"?
>
> Is this a replicate volume? Files end up in .glusterfs/unlink post
> deletion only if there is still an fd open on the file. Perhaps there was
> an ongoing data self-heal, or another application had not yet closed the
> file descriptor?
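> A quick way to check whether a heal is still pending on the file is
> something like (volume name is a placeholder, since I don't know your
> setup yet):
>
>     gluster volume heal <volname> info
>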
> Which version of gluster are you using and what is the volume info?
> -Ravi
>
>
> The issue was mentioned here already:
> https://lists.gluster.org/pipermail/gluster-devel/2016-July/049952.html
>
> and there seems to be a fix. But unfortunately it still occurs, and the
> only workaround is to restart the brick processes or wait for several
> hours.
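> For reference, the restart workaround looks roughly like this (the brick
> PID comes from 'gluster volume status'; 'start force' respawns bricks
> that are not running without touching the healthy ones):
>
>     gluster volume status testv1        # note the PID of the affected brick
>     kill <brick-pid>                    # <brick-pid> is a placeholder
>     gluster volume start testv1 force   # restarts the killed brick process
>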
>
> Regards
> David Spisla
>
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users