[Gluster-users] Vol full of .*.gfs* after migrate-data
Dan Bretherton
d.a.bretherton at reading.ac.uk
Mon Oct 10 18:08:57 UTC 2011
On 03/10/11 19:08, Dan Bretherton wrote:
>
> On 02/10/11 02:12, Amar Tumballi wrote:
>> Dan,
>>
>> Answer inline.
>>
>> On 02-Oct-2011, at 1:26 AM, Dan
>> Bretherton<d.a.bretherton at reading.ac.uk> wrote:
>>
>>> Hello All,
>>> I have been testing rebalance...migrate-data in GlusterFS version
>>> 3.2.3, following add-brick and fix-layout. After migrate-data the
>>> volume is 97% full, with some bricks being 100% full. I have not
>>> added any files to the volume so there should be an amount of free
>>> space at least as big as the new bricks that were added. However,
>>> it seems as if all the extra space has been taken up with files
>>> matching the pattern .*.gfs*. I presume these are temporary files
>>> used for transferring real files, which should have been renamed
>>> once the transfers were completed and verified, and the original
>>> versions deleted. The new bricks contain mostly these temporary
>>> files, and zero byte link files pointing to the corresponding real
>>> files on other bricks. An example of such a pair is shown below.
>>>
>>> ---------T 1 root root 0 Sep 30 03:14
>>> /mnt/local/glusterfs/root/backup/behemoth_system/bin
>>> -rwxr-xr-x 1 root root 60416 Sep 30 18:20
>>> /mnt/local/glusterfs/root/backup/behemoth_system/bin/.df.gfs60416
>>>
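[Editorial note: pairs like the one above can be located on a brick with find. This is only a sketch run against a scratch directory standing in for a real brick; the temp-file pattern and the ---------T (sticky-bit, zero-byte) link-file mode are taken from the example above.]

```shell
# Scratch directory standing in for a brick
# (the real path in this thread is /mnt/local/glusterfs).
BRICK=$(mktemp -d)
mkdir -p "$BRICK/backup/behemoth_system/bin"
touch "$BRICK/backup/behemoth_system/bin/.df.gfs60416"   # leftover temp file
touch "$BRICK/backup/behemoth_system/bin/df"
chmod 1000 "$BRICK/backup/behemoth_system/bin/df"        # demo link file: ---------T

# Leftover rebalance temp files match the pattern .<name>.gfs<size>:
find "$BRICK" -type f -name '.*.gfs*'

# Zero-byte link files carry the sticky bit and no rwx permissions:
find "$BRICK" -type f -perm -1000 -size 0
```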
>>> Is this a known bug, and is there a work-around? If not, is it safe
>>> to delete the .*.gfs* files so I can at least use the volume?
>>>
>> This is not a known issue, but it certainly seems like a bug. If the
>> source file is intact you can delete the temp file to get the space
>> back. Also, if the md5sums are the same, you can rename the temp file
>> to the original, which frees space on the existing bricks.
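[Editorial note: the checksum-then-rename recovery Amar describes might be scripted along these lines. This is only a sketch run against demo scratch files, not the real brick paths; anything like this should be tried on a single file first.]

```shell
# Demo of the checksum-then-rename idea: scratch files stand in
# for a real original/temp pair on a brick.
dir=$(mktemp -d)
cd "$dir"
printf 'file contents\n' > df            # "original" file
printf 'file contents\n' > .df.gfs60416  # "temp" file left by migrate-data

orig_sum=$(md5sum < df)
tmp_sum=$(md5sum < .df.gfs60416)
if [ "$orig_sum" = "$tmp_sum" ]; then
    # Checksums match: the copies are identical, so renaming the temp
    # file over the original keeps one copy and frees the duplicate.
    mv .df.gfs60416 df
fi
ls -A
```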
>>
>> Regards,
>> Amar
>>
>>
>>> Regards
>>> Dan Bretherton
>>>
>>> _______________________________________________
>>> Gluster-users mailing list
>>> Gluster-users at gluster.org
>>> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
> Amar- Thanks for the information and the patch. The
> etc-glusterd-mount-<volname>.log file can be downloaded from here:
>
> http://www.nerc-essc.ac.uk/~dab/etc-glusterd-mount-backup.log.tar.gz
>
> I am using CentOS 5.5 by the way.
>
> -Dan.
>
Hello again-
I tested the patch and I confirm that it works; there are no *.gfs*
files in my volume after performing a migrate-data operation. However,
there is still something not quite right. One of the replicated brick
pairs is 100% full, whereas the others are approximately 50% full. I
would have expected all the bricks to contain roughly the same amount of
data after migrate-data, and this effect is mainly what I want to use
migrate-data for. Do you know why this might have happened, or how to avoid
it? The log files from the latest migrate-data operation can be
downloaded from here:
http://www.nerc-essc.ac.uk/~dab/backup_migrate-data_logs.tar.gz
-Dan.