[Gluster-users] Rebalance issue on 3.5.3

Joe Julian joe at julianfamily.org
Wed Mar 11 14:51:59 UTC 2015


Those files are DHT link files. Check out the extended attributes: "getfattr -m . -d".
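For example, picking one of those zero-length sticky-bit files directly on a brick (the path below is only an illustration):

getfattr -m . -d -e hex /data/glusterfs/home/brick1/path/to/file

A DHT link file should show a trusted.glusterfs.dht.linkto attribute naming the subvolume (replica pair) that actually holds the file's data; the zero-byte ---------T entry itself is just a pointer DHT leaves when a file sits on a different subvolume than its hash points to, for example during a rebalance.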

On March 10, 2015 7:30:33 AM PDT, Alessandro Ipe <Alessandro.Ipe at meteo.be> wrote:
>Hi,
>
>
>A couple of days ago, I launched a rebalance on my Gluster distributed-replicate
>volume (see below) through its CLI, while allowing my users to continue using
>the volume.
>
>Yesterday, they managed to completely fill the volume. This now results in
>unavailable files on the client (using FUSE), with the message "Transport
>endpoint is not connected". Investigating the associated files on the bricks,
>I noticed that they are displayed by ls -l as
>---------T 2 user group 0 Jan 15 22:00 file
>Performing
>ls -lR /data/glusterfs/home/brick1/* | grep -F -- "---------T"
>on a single brick gave me a LOT of files in the above-mentioned state.
>
>Why are the files in that state?
>
>Did I lose all these files, or can they still be recovered from the replica
>copy on another brick?
>
>
>Regards,
>
>
>Alessandro.
>
>
>gluster volume info home output:
>Volume Name: home
>Type: Distributed-Replicate
>Volume ID: 501741ed-4146-4022-af0b-41f5b1297766
>Status: Started
>Number of Bricks: 12 x 2 = 24
>Transport-type: tcp
>Bricks:
>Brick1: tsunami1:/data/glusterfs/home/brick1
>Brick2: tsunami2:/data/glusterfs/home/brick1
>Brick3: tsunami1:/data/glusterfs/home/brick2
>Brick4: tsunami2:/data/glusterfs/home/brick2
>Brick5: tsunami1:/data/glusterfs/home/brick3
>Brick6: tsunami2:/data/glusterfs/home/brick3
>Brick7: tsunami1:/data/glusterfs/home/brick4
>Brick8: tsunami2:/data/glusterfs/home/brick4
>Brick9: tsunami3:/data/glusterfs/home/brick1
>Brick10: tsunami4:/data/glusterfs/home/brick1
>Brick11: tsunami3:/data/glusterfs/home/brick2
>Brick12: tsunami4:/data/glusterfs/home/brick2
>Brick13: tsunami3:/data/glusterfs/home/brick3
>Brick14: tsunami4:/data/glusterfs/home/brick3
>Brick15: tsunami3:/data/glusterfs/home/brick4
>Brick16: tsunami4:/data/glusterfs/home/brick4
>Brick17: tsunami5:/data/glusterfs/home/brick1
>Brick18: tsunami6:/data/glusterfs/home/brick1
>Brick19: tsunami5:/data/glusterfs/home/brick2
>Brick20: tsunami6:/data/glusterfs/home/brick2
>Brick21: tsunami5:/data/glusterfs/home/brick3
>Brick22: tsunami6:/data/glusterfs/home/brick3
>Brick23: tsunami5:/data/glusterfs/home/brick4
>Brick24: tsunami6:/data/glusterfs/home/brick4
>Options Reconfigured:
>features.default-soft-limit: 95%
>cluster.ensure-durability: off
>performance.cache-size: 512MB
>performance.io-thread-count: 64
>performance.flush-behind: off
>performance.write-behind-window-size: 4MB
>performance.write-behind: on
>nfs.disable: on
>features.quota: on
>cluster.read-hash-mode: 2
>diagnostics.brick-log-level: CRITICAL
>cluster.lookup-unhashed: off
>server.allow-insecure: on
>
>
>
>

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.