[Gluster-users] Rebalance issue on 3.5.3

Alessandro Ipe Alessandro.Ipe at meteo.be
Thu Mar 12 17:58:29 UTC 2015


Hi,


The extended attributes are (numbered per brick copy):
1. # file: data/glusterfs/home/brick1/aipe/.xinitrc.template
trusted.gfid=0x67bf3db057474c0a892f459b6c622ee8
trusted.glusterfs.dht.linkto=0x686f6d652d7265706c69636174652d3500
trusted.pgfid.c7ee612b-0dfe-4832-9efe-531040c696fd=0x00000001

2. # file: data/glusterfs/home/brick1/aipe/.xinitrc.template
trusted.gfid=0x67bf3db057474c0a892f459b6c622ee8
trusted.glusterfs.dht.linkto=0x686f6d652d7265706c69636174652d3500
trusted.pgfid.c7ee612b-0dfe-4832-9efe-531040c696fd=0x00000001

Stat'ing these two gives me 0-size files.
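For what it's worth, the trusted.glusterfs.dht.linkto value is just a NUL-terminated
ASCII string in hex, naming the subvolume the link file points to. A quick way to
decode it (assuming xxd is available):

# drop the 0x prefix and reverse the plain hex dump; the trailing 00 is a NUL
echo 686f6d652d7265706c69636174652d3500 | xxd -r -p
# prints: home-replicate-5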

3. # file: data/glusterfs/home/brick2/aipe/.xinitrc.template
trusted.afr.home-client-10=0x000000000000000000000000
trusted.afr.home-client-11=0x000000000000000000000000
trusted.gfid=0x67bf3db057474c0a892f459b6c622ee8
trusted.glusterfs.quota.c7ee612b-0dfe-4832-9efe-531040c696fd.contri=0x0000000000000600
trusted.pgfid.c7ee612b-0dfe-4832-9efe-531040c696fd=0x00000001

4. # file: data/glusterfs/home/brick2/aipe/.xinitrc.template
trusted.afr.home-client-10=0x000000000000000000000000
trusted.afr.home-client-11=0x000000000000000000000000
trusted.gfid=0x67bf3db057474c0a892f459b6c622ee8
trusted.glusterfs.quota.c7ee612b-0dfe-4832-9efe-531040c696fd.contri=0x0000000000000600
trusted.pgfid.c7ee612b-0dfe-4832-9efe-531040c696fd=0x00000001

These two are non-zero-size files.
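To compare the copies side by side, something like the following on each brick
server does the job (the paths follow my layout and are only a sketch):

# print name, size in bytes and mode for each local brick copy
for b in brick1 brick2; do
  stat -c '%n %s %A' /data/glusterfs/home/$b/aipe/.xinitrc.template
done

The link files show up as 0 bytes with mode ---------T; the real copies show
their actual size.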


Thanks,


A.


On Wednesday 11 March 2015 07:51:59 Joe Julian wrote:


Those files are DHT link files. Check the extended attributes with "getfattr -m . -d".
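For example, against one brick copy, with hex encoding so the values are
unambiguous (the path here is just an illustration):

getfattr -m . -d -e hex /data/glusterfs/home/brick1/aipe/.xinitrc.template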



On March 10, 2015 7:30:33 AM PDT, Alessandro Ipe <Alessandro.Ipe at meteo.be> 
wrote:
Hi,


A couple of days ago, I launched a rebalance on my gluster distributed-replicate volume
(see below) through its CLI, while allowing my users to continue using the volume.

Yesterday, they managed to fill the volume completely. This now results in unavailable
files on the client (using FUSE), with the message "Transport endpoint is not
connected". Investigating the associated files on the bricks, I noticed that they are
displayed by ls -l as
---------T 2 user group 0 Jan 15 22:00 file
Performing a 
ls -lR /data/glusterfs/home/brick1/* | grep -F -- "---------T"
on a single brick gave me a LOT of files in the above-mentioned state.
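A more targeted search for such candidates, since DHT link files are zero-length with
only the sticky bit set (mode 1000), would be something along the lines of

find /data/glusterfs/home/brick1 -type f -perm 1000 -size 0

which avoids parsing ls output.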

Why are the files in that state?

Did I lose all these files, or can they still be recovered from the replica copy on
another brick?


Regards,


Alessandro.


gluster volume info home output:
Volume Name: home
Type: Distributed-Replicate
Volume ID: 501741ed-4146-4022-af0b-41f5b1297766
Status: Started
Number of Bricks: 12 x 2 = 24
Transport-type: tcp
Bricks:
Brick1: tsunami1:/data/glusterfs/home/brick1
Brick2: tsunami2:/data/glusterfs/home/brick1
Brick3: tsunami1:/data/glusterfs/home/brick2
Brick4: tsunami2:/data/glusterfs/home/brick2
Brick5: tsunami1:/data/glusterfs/home/brick3
Brick6: tsunami2:/data/glusterfs/home/brick3
Brick7: tsunami1:/data/glusterfs/home/brick4
Brick8: tsunami2:/data/glusterfs/home/brick4
Brick9: tsunami3:/data/glusterfs/home/brick1
Brick10: tsunami4:/data/glusterfs/home/brick1
Brick11: tsunami3:/data/glusterfs/home/brick2
Brick12: tsunami4:/data/glusterfs/home/brick2
Brick13: tsunami3:/data/glusterfs/home/brick3
Brick14: tsunami4:/data/glusterfs/home/brick3
Brick15: tsunami3:/data/glusterfs/home/brick4
Brick16: tsunami4:/data/glusterfs/home/brick4
Brick17: tsunami5:/data/glusterfs/home/brick1
Brick18: tsunami6:/data/glusterfs/home/brick1
Brick19: tsunami5:/data/glusterfs/home/brick2
Brick20: tsunami6:/data/glusterfs/home/brick2
Brick21: tsunami5:/data/glusterfs/home/brick3
Brick22: tsunami6:/data/glusterfs/home/brick3
Brick23: tsunami5:/data/glusterfs/home/brick4
Brick24: tsunami6:/data/glusterfs/home/brick4
Options Reconfigured:
features.default-soft-limit: 95%
cluster.ensure-durability: off
performance.cache-size: 512MB
performance.io-thread-count: 64
performance.flush-behind: off
performance.write-behind-window-size: 4MB
performance.write-behind: on
nfs.disable: on