[Gluster-users] Vol full of .*.gfs* after migrate-data

shylesh shylesh at gluster.com
Mon Jan 2 07:42:47 UTC 2012


On Monday 02 January 2012 05:05 AM, Dan Bretherton wrote:
>> On 03/10/11 19:08, Dan Bretherton wrote:
>>>
>>> On 02/10/11 02:12, Amar Tumballi wrote:
>>>> Dan,
>>>>
>>>> Answer inline.
>>>>
>>>> On 02-Oct-2011, at 1:26 AM, Dan 
>>>> Bretherton <d.a.bretherton at reading.ac.uk> wrote:
>>>>
>>>>> Hello All,
>>>>> I have been testing rebalance...migrate-data in GlusterFS version 
>>>>> 3.2.3, following add-brick and fix-layout.  After migrate-data the 
>>>>> volume is 97% full, with some bricks being 100% full.  I have 
>>>>> not added any files to the volume so there should be an amount of 
>>>>> free space at least as big as the new bricks that were added.  
>>>>> However, it seems as if all the extra space has been taken up with 
>>>>> files matching the pattern .*.gfs*.  I presume these are temporary 
>>>>> files used to transfer the real files, which should have been 
>>>>> renamed once the transfers were completed and verified, and the 
>>>>> original versions deleted.  The new bricks contain mostly these 
>>>>> temporary files, and zero byte link files pointing to the 
>>>>> corresponding real files on other bricks.  An example of such a 
>>>>> pair is shown below.
>>>>>
>>>>> ---------T 1 root root 0 Sep 30 03:14 
>>>>> /mnt/local/glusterfs/root/backup/behemoth_system/bin
>>>>> -rwxr-xr-x 1 root root 60416 Sep 30 18:20 
>>>>> /mnt/local/glusterfs/root/backup/behemoth_system/bin/.df.gfs60416
>>>>>
>>>>> Is this a known bug, and is there a work-around?  If not, is it 
>>>>> safe to delete the .*.gfs* files so I can at least use the volume?
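>>>>>
>>>>> For reference, this is how I am finding the leftover pairs on each 
>>>>> brick (run on the bricks themselves, not at the mount point; the 
>>>>> brick path is the one from the example above):
>>>>>
>>>>>   # temporary rebalance files follow the .<name>.gfs<size> pattern
>>>>>   find /mnt/local/glusterfs -type f -name '.*.gfs*'
>>>>>   # the zero-byte link files carry only the sticky bit (mode T)
>>>>>   find /mnt/local/glusterfs -type f -size 0 -perm -1000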
>>>>>
>>>> This is not a known issue, but it surely seems like a bug. If the 
>>>> source file is intact you can delete the temp file to get the space 
>>>> back. Also, if the md5sum is the same, you can rename the temp file 
>>>> to the original, so you regain space on the existing bricks.
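>>>>
>>>> A minimal sketch of that check for one pair (the paths here are 
>>>> illustrative; note that the intact source copy may sit on a 
>>>> different brick or server, so compare checksums across machines if 
>>>> needed):
>>>>
>>>>   tmp=/mnt/local/glusterfs/some/dir/.FILE.gfs12345  # hypothetical temp file
>>>>   src=/mnt/local/glusterfs/other/brick/FILE         # hypothetical intact source
>>>>   if [ "$(md5sum "$tmp" | awk '{print $1}')" = \
>>>>        "$(md5sum "$src" | awk '{print $1}')" ]; then
>>>>       mv "$tmp" "$(dirname "$tmp")/FILE"  # sums match: promote the temp file
>>>>   else
>>>>       echo "md5sum mismatch - investigate before deleting anything"
>>>>   fi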
>>>>
>>>> Regards,
>>>> Amar
>>>>
>>>>
>>>>> Regards
>>>>> Dan Bretherton
>>>>>
>>>>> _______________________________________________
>>>>> Gluster-users mailing list
>>>>> Gluster-users at gluster.org
>>>>> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>>> Amar- Thanks for the information and the patch.  The 
>>> etc-glusterd-mount-<volname>.log file can be downloaded from here:
>>>
>>> http://www.nerc-essc.ac.uk/~dab/etc-glusterd-mount-backup.log.tar.gz
>>>
>>> I am using CentOS 5.5 by the way.
>>>
>>> -Dan.
>>>
>> Hello again-
>> I tested the patch and I confirm that it works; there are no *.gfs* 
>> files in my volume after performing a migrate-data operation.  
>> However, there is still something not quite right.  One of the 
>> replicated brick pairs is 100% full, whereas the others are 
>> approximately 50% full.  I would have expected all the bricks to 
>> contain roughly the same amount of data after migrate-data, and this 
>> effect is mainly what I want to use migrate-data for.  Do you why 
>> this might have happened or how to avoid it?  The log files from the 
>> latest migrate-data operation can be downloaded from here:
>>
>> http://www.nerc-essc.ac.uk/~dab/backup_migrate-data_logs.tar.gz
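>>
>> To check the spread I am simply running df against every brick 
>> (hostnames and the brick path are illustrative):
>>
>>   for h in server1 server2 server3 server4; do
>>       ssh "$h" df -h /mnt/local/glusterfs
>>   done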
>>
>> -Dan.
>
> Hello Amar and gluster-users,
>
> I have tested rebalance...migrate-data in version 3.2.5 and found 
> three serious problems still present.
>
> 1) There are lots of *.gfs* files after migrate-data.  This didn't 
> happen when I tested the patched version of 3.2.4.
> 2) There are lots of duplicate files after migrate-data, i.e. lots of 
> files seen twice at the mount point.  I have never seen this happen 
> before, and I would really like to know how to repair the volume.  
> There are ~6000 duplicates out of a total of ~1 million files in the 
> volume, so dealing with each one individually would be impractical 
> (the one-liner after this list is how I am counting them).
> 3) A lot of files have wrong permissions after migrate-data.  For 
> example, -rwsr-xr-x commonly becomes -rwxr-xr-x, and -rw-rw-r-- 
> commonly becomes -rw-r--r-- (see the permissions snapshot sketch 
> further down).
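>
> The one-liner I am using to count the duplicates, run at the mount 
> point (path illustrative): it prints any path that the directory 
> listing returns more than once.
>
>   find /mnt/glusterfs | sort | uniq -d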
>
> Are these known problems, and if so is there a new version with fixes 
> in the pipeline?
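>
> Before the next rebalance I will snapshot the mode and ownership of 
> every file, so that anything the rebalance alters can be found and 
> put back (mount point path illustrative; needs GNU find):
>
>   # record octal mode, owner, group and path for every file
>   find /mnt/glusterfs -printf '%m %u %g %p\n' | sort -k4 > perms.before
>   # repeat after the rebalance and diff to list what changed
>   find /mnt/glusterfs -printf '%m %u %g %p\n' | sort -k4 > perms.after
>   diff perms.before perms.after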
>
> Regards
> Dan.
>

Hi Dan,


I tried the following steps:
----------------------------

1. Created a replicate volume.
2. Filled the volume with 100,000 files.
3. Added two more bricks with very little space, to make sure an 
out-of-space condition would occur (the volume type is now 
distributed-replicate).
4. Started the rebalance.  Once the newly added bricks were full, the 
logs showed "out of disk space" messages, but no migration happened 
(a rough command-line equivalent of these steps is sketched below).
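
The commands behind those steps are roughly the following (hostnames, 
brick paths and the volume name are all illustrative; the rebalance 
syntax is the 3.2-series fix-layout/migrate-data form):

  gluster volume create testvol replica 2 server1:/bricks/b1 server2:/bricks/b2
  gluster volume start testvol
  mount -t glusterfs server1:/testvol /mnt/testvol
  for i in $(seq 1 100000); do echo data > /mnt/testvol/file$i; done
  # two deliberately small bricks; the volume becomes distributed-replicate
  gluster volume add-brick testvol server1:/bricks/small1 server2:/bricks/small2
  gluster volume rebalance testvol fix-layout start
  gluster volume rebalance testvol migrate-data start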

On the mount point I could not see any *.gfs* files, and the 
permissions of the files were unchanged after the rebalance.


Please let me know if I am missing something.


Thanks,
Shylesh



