[Gluster-users] --T files after rebalance
Viktor Villafuerte
viktor.villafuerte@optusnet.com.au
Wed Feb 26 03:14:24 UTC 2014
I should add that this is also found in the logs:
---------------------------------------------------------------[ gluster02.uat ]
/var/log/glusterfs/cdn-uat-rebalance.log:[2014-02-26 00:06:38.550396] I [dht-common.c:1017:dht_lookup_everywhere_cbk] 0-cdn-uat-dht: deleting stale linkfile ThePinkPanther2_2009_23_HLS_layer2_642000_95.ts on cdn-uat-replicate-2
---------------------------------------------------------------[ gluster03.uat ]
/var/log/glusterfs/cdn-uat-rebalance.log:[2014-02-26 00:06:38.556472] E [afr-self-heal-common.c:2212:afr_self_heal_completion_cbk] 0-cdn-uat-replicate-2: background meta-data data entry missing-entry gfid self-heal failed on ThePinkPanther2_2009_23_HLS_layer2_642000_95.ts
These files are also found on the bricks; the actual Gluster mount itself
seems to be OK. However, I never saw this in v3.2.5, and the error
above says that something is not right.
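
For reference, these are DHT linkfiles: zero-byte, sticky-bit-only files
carrying a trusted.glusterfs.dht.linkto xattr. Something like this should
enumerate them on a brick (a sketch only; brick path per the volume info
below, and it assumes the linkfiles have exactly mode 1000 as in the ls
output further down):

# list sticky-bit-only, zero-byte files on the brick, skipping .glusterfs
find /mnt/gluster/brick01/data -path '*/.glusterfs' -prune -o \
    -type f -perm 1000 -size 0 -print |
while read -r f; do
    # confirm each one really is a linkfile by checking for the linkto xattr
    getfattr --absolute-names -n trusted.glusterfs.dht.linkto "$f" 2>/dev/null
done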
volume info here:
[root@gluster08.uat g34]# gluster volume info
Volume Name: cdn-uat
Type: Distributed-Replicate
Volume ID: 3e353d61-ac78-43d4-af20-55d1672a5cd3
Status: Started
Number of Bricks: 3 x 2 = 6
Transport-type: tcp
Bricks:
Brick1: gluster08.uat:/mnt/gluster/brick01/data
Brick2: gluster07.uat:/mnt/gluster/brick01/data
Brick3: gluster01.uat:/mnt/gluster/brick01/data
Brick4: gluster02.uat:/mnt/gluster/brick01/data
Brick5: gluster03.uat:/mnt/gluster/brick01/data
Brick6: gluster04.uat:/mnt/gluster/brick01/data
Options Reconfigured:
diagnostics.client-log-level: ERROR
[root@gluster08.uat g34]#
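
I'll also double-check that the rebalance itself finished cleanly and that
nothing is still pending heal (standard CLI, using this volume's name):

# should show completed with no failures on each node
gluster volume rebalance cdn-uat status

# lists any entries that still need healing
gluster volume heal cdn-uat info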
On Wed 26 Feb 2014 13:50:40, Viktor Villafuerte wrote:
> Hi all,
>
> I've got these packages installed
>
> [root@gluster04.uat g34]# rpm -qa | grep gluster
> glusterfs-3.4.2-1.el6.x86_64
> glusterfs-cli-3.4.2-1.el6.x86_64
> glusterfs-libs-3.4.2-1.el6.x86_64
> glusterfs-fuse-3.4.2-1.el6.x86_64
> glusterfs-server-3.4.2-1.el6.x86_64
> [root@gluster04.uat g34]#
>
>
> after the rebalance I have a number of files in 'T' mode:
>
> [root@gluster04.uat g34]# ls -l ThePinkPanther2_2009_23_HLS_layer2_642000_95.ts
> ---------T 2 1000 1000 0 Feb 26 11:06 ThePinkPanther2_2009_23_HLS_layer2_642000_95.ts
> [root@gluster04.uat g34]#
>
>
> I've tried this twice, once extending 1 (1x1) => 2 (1x1) and once 2 (1x1) =>
> 3 (1x1), and both times I ended up with about 1000 files like that one:
>
>
> [root@gluster04.uat g34]# getfattr -m trusted.* -d ThePinkPanther2_2009_23_HLS_layer2_642000_95.ts
> # file: ThePinkPanther2_2009_23_HLS_layer2_642000_95.ts
> trusted.gfid="�]�U\\�H<���-��"
> trusted.glusterfs.dht.linkto="cdn-uat-replicate-0"
>
> [root@gluster04.uat g34]#
>
>
> which points to the '0' replica, and sure enough:
>
>
> [root@gluster08.uat g34]# ls -l ThePinkPanther2_2009_23_HLS_layer2_642000_95.ts
> -rw-r--r-- 2 1000 1000 997728 Jan 8 11:14 ThePinkPanther2_2009_23_HLS_layer2_642000_95.ts
>
>
>
> Now when I remove the file from '08' I get:
>
> [root@gluster08.uat g34]# ls -l ThePinkPanther2_2009_23_HLS_layer2_642000_95.ts
> ls: cannot access ThePinkPanther2_2009_23_HLS_layer2_642000_95.ts: No such file or directory
> [root@gluster08.uat g34]#
>
>
>
> but
>
>
>
> [root@gluster04.uat g34]# getfattr -m trusted.* -d ThePinkPanther2_2009_23_HLS_layer2_642000_95.ts
> # file: ThePinkPanther2_2009_23_HLS_layer2_642000_95.ts
> trusted.gfid="�]�U\\�H<���-��"
> trusted.glusterfs.dht.linkto="cdn-uat-replicate-0"
>
> [root@gluster04.uat g34]#
>
>
>
>
> Surely this is not by design? Is there a way to fix this, or what would
> be the recommended series of actions to take now to rectify it?
>
>
> v
>
>
> --
> Regards
>
> Viktor Villafuerte
> Optus Internet Engineering
> t: 02 808-25265
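
In case it helps frame an answer: the fix I'm tempted to try, based on what
I've read about stale DHT linkfiles, is to remove the linkfile and its
.glusterfs hard link on the brick and let the next lookup from a client
recreate it. A rough sketch only, with partly guessed paths; I'd verify each
file really is a zero-byte sticky-bit copy before removing anything:

# paths below are partly guessed from the listings above
BRICK=/mnt/gluster/brick01/data
F=$BRICK/g34/ThePinkPanther2_2009_23_HLS_layer2_642000_95.ts

# read the gfid as hex (much more readable than the raw bytes above)
GFID=$(getfattr --absolute-names -e hex -n trusted.gfid "$F" \
       | awk -F= '/trusted.gfid/ {print substr($2,3)}')

# each brick keeps a hard link to the file under .glusterfs/aa/bb/<uuid>
UUID=$(echo "$GFID" | sed 's/\(.\{8\}\)\(.\{4\}\)\(.\{4\}\)\(.\{4\}\)/\1-\2-\3-\4-/')
GLINK=$BRICK/.glusterfs/${GFID:0:2}/${GFID:2:2}/$UUID

# remove the zero-byte linkfile and its .glusterfs twin -- never the real copy
rm -f "$F" "$GLINK"

# a lookup through a client mount should then recreate a fresh linkfile
stat /mnt/cdn-uat/g34/ThePinkPanther2_2009_23_HLS_layer2_642000_95.ts  # mount point is a guess

Is that a sane approach, or is there a supported way to clean these up?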
--
Regards
Viktor Villafuerte
Optus Internet Engineering
t: 02 808-25265