[Gluster-users] Rebalance issue on 3.5.3

Alessandro Ipe Alessandro.Ipe at meteo.be
Tue Mar 10 14:30:33 UTC 2015


Hi,


A couple of days ago, I launched a rebalance on my gluster distributed-replicated volume
(see below) through its CLI, while allowing my users to continue using the volume.

Yesterday, they managed to fill the volume completely. This now results in unavailable
files on the client (mounted via FUSE), with the message "Transport endpoint is not
connected". Investigating the associated files on the bricks, I noticed that ls -l
displays them as
---------T 2 user group 0 Jan 15 22:00 file
Running
ls -lR /data/glusterfs/home/brick1/* | grep -F -- "---------T"
on a single brick gave me a LOT of files in that above-mentioned state.
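For what it's worth, zero-byte files whose only permission bit is the sticky bit (mode 1000, shown as "---------T") are how DHT marks link files, and they normally carry a trusted.glusterfs.dht.linkto xattr naming the subvolume that holds the real data. A sketch of how one might check this (using a scratch directory as a stand-in for a real brick path; on an actual brick the getfattr line would be run directly):

```shell
# Stand-in for a brick path such as /data/glusterfs/home/brick1
brick=$(mktemp -d)

# Mimic a DHT link file: zero bytes, mode 1000 ("---------T" in ls -l)
touch "$brick/file"
chmod 1000 "$brick/file"

# Locate candidate link files: exactly mode 1000 and empty
find "$brick" -type f -perm 1000 -size 0

# On a real brick, read where the data is supposed to live:
# getfattr -n trusted.glusterfs.dht.linkto -e text "$brick/file"
```

If the xattr points at a subvolume that actually has the data, the file itself should not be lost, but that is something the list can confirm.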

Why are the files in that state?

Did I lose all these files, or can they still be recovered from the replica copy on
another brick?


Regards,


Alessandro.


gluster volume info home output:
Volume Name: home
Type: Distributed-Replicate
Volume ID: 501741ed-4146-4022-af0b-41f5b1297766
Status: Started
Number of Bricks: 12 x 2 = 24
Transport-type: tcp
Bricks:
Brick1: tsunami1:/data/glusterfs/home/brick1
Brick2: tsunami2:/data/glusterfs/home/brick1
Brick3: tsunami1:/data/glusterfs/home/brick2
Brick4: tsunami2:/data/glusterfs/home/brick2
Brick5: tsunami1:/data/glusterfs/home/brick3
Brick6: tsunami2:/data/glusterfs/home/brick3
Brick7: tsunami1:/data/glusterfs/home/brick4
Brick8: tsunami2:/data/glusterfs/home/brick4
Brick9: tsunami3:/data/glusterfs/home/brick1
Brick10: tsunami4:/data/glusterfs/home/brick1
Brick11: tsunami3:/data/glusterfs/home/brick2
Brick12: tsunami4:/data/glusterfs/home/brick2
Brick13: tsunami3:/data/glusterfs/home/brick3
Brick14: tsunami4:/data/glusterfs/home/brick3
Brick15: tsunami3:/data/glusterfs/home/brick4
Brick16: tsunami4:/data/glusterfs/home/brick4
Brick17: tsunami5:/data/glusterfs/home/brick1
Brick18: tsunami6:/data/glusterfs/home/brick1
Brick19: tsunami5:/data/glusterfs/home/brick2
Brick20: tsunami6:/data/glusterfs/home/brick2
Brick21: tsunami5:/data/glusterfs/home/brick3
Brick22: tsunami6:/data/glusterfs/home/brick3
Brick23: tsunami5:/data/glusterfs/home/brick4
Brick24: tsunami6:/data/glusterfs/home/brick4
Options Reconfigured:
features.default-soft-limit: 95%
cluster.ensure-durability: off
performance.cache-size: 512MB
performance.io-thread-count: 64
performance.flush-behind: off
performance.write-behind-window-size: 4MB
performance.write-behind: on
nfs.disable: on
features.quota: on
cluster.read-hash-mode: 2
diagnostics.brick-log-level: CRITICAL
cluster.lookup-unhashed: off
server.allow-insecure: on

