[Gluster-users] deleted files make bricks full?

Tomoaki Sato tsato@valinux.co.jp
Mon Aug 29 00:24:59 UTC 2011


Krishna,

I've reproduced the issue with a new set of four brick VMs, four-1-private through four-4-private.

case #1: remount the same NFS server (four-1-private) and delete the file ==> OK (space is reclaimed)
case #2: mount a different NFS server (four-2-private) and delete the file ==> NG (space is not reclaimed)
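
For reference, the whole repro can be scripted roughly as follows (a sketch only; it assumes /mnt is free and reuses the hostnames and volume name above):

  #!/bin/sh
  # case #1: write and delete through the same NFS server -> space reclaimed
  mount four-1-private:/four /mnt
  dd if=/dev/zero of=/mnt/1GB bs=1MB count=1024
  umount /mnt
  mount four-1-private:/four /mnt
  rm -f /mnt/1GB     # df afterwards shows Used back at the starting value
  umount /mnt

  # case #2: write through one server, delete through another -> space leaks
  mount four-1-private:/four /mnt
  dd if=/dev/zero of=/mnt/1GB bs=1MB count=1024
  umount /mnt
  mount four-2-private:/four /mnt
  rm -f /mnt/1GB     # df afterwards still shows the 1 GB as Used
  umount /mnt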

The command logs follow:

<case #1>

[root@vhead-010 ~]# mount four-1-private:/four /mnt
[root@vhead-010 ~]# df /mnt
Filesystem           1K-blocks      Used Available Use% Mounted on
four-1-private:/four 412849280 269468800 143380480  66% /mnt
[root@vhead-010 ~]# dd if=/dev/zero of=/mnt/1GB bs=1MB count=1024
1024+0 records in
1024+0 records out
1024000000 bytes (1.0 GB) copied, 20.2477 seconds, 50.6 MB/s
[root@vhead-010 ~]# df /mnt
Filesystem           1K-blocks      Used Available Use% Mounted on
four-1-private:/four 412849280 270469760 142379520  66% /mnt
[root@vhead-010 ~]# ls -l /mnt/1GB
-rw-r--r-- 1 root root 1024000000 Aug 29 08:38 /mnt/1GB
[root@vhead-010 ~]# umount /mnt
[root@vhead-010 ~]# mount four-1-private:/four /mnt
[root@vhead-010 ~]# df /mnt
Filesystem           1K-blocks      Used Available Use% Mounted on
four-1-private:/four 412849280 270469760 142379520  66% /mnt
[root@vhead-010 ~]# ls -l /mnt/1GB
-rw-r--r-- 1 root root 1024000000 Aug 29 08:38 /mnt/1GB
[root@vhead-010 ~]# rm /mnt/1GB
rm: remove regular file `/mnt/1GB'? y
[root@vhead-010 ~]# df /mnt
Filesystem           1K-blocks      Used Available Use% Mounted on
four-1-private:/four 412849280 269468800 143380480  66% /mnt
[root@vhead-010 ~]#

and <case #2>

[root@vhead-010 ~]# dd if=/dev/zero of=/mnt/1GB bs=1MB count=1024
1024+0 records in
1024+0 records out
1024000000 bytes (1.0 GB) copied, 20.1721 seconds, 50.8 MB/s
[root@vhead-010 ~]# df /mnt
Filesystem           1K-blocks      Used Available Use% Mounted on
four-1-private:/four 412849280 270469760 142379520  66% /mnt
[root@vhead-010 ~]# ls -l /mnt/1GB
-rw-r--r-- 1 root root 1024000000 Aug 29 08:40 /mnt/1GB
[root@vhead-010 ~]# umount /mnt
[root@vhead-010 ~]# mount four-2-private:/four /mnt
[root@vhead-010 ~]# df /mnt
Filesystem           1K-blocks      Used Available Use% Mounted on
four-2-private:/four 412849280 270469760 142379520  66% /mnt
[root@vhead-010 ~]# ls -l /mnt/1GB
-rw-r--r-- 1 root root 1024000000 Aug 29 08:40 /mnt/1GB
[root@vhead-010 ~]# rm /mnt/1GB
rm: remove regular file `/mnt/1GB'? y
[root@vhead-010 ~]# df /mnt
Filesystem           1K-blocks      Used Available Use% Mounted on
four-2-private:/four 412849280 270469760 142379520  66% /mnt
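
To see where the space is stuck, each brick backend can be checked directly (a sketch; it assumes the brick export directory is /four on every VM, which may differ on your setup):

  for h in four-1-private four-2-private four-3-private four-4-private; do
      echo "== $h =="
      ssh "$h" 'df /four; ls -lA /four'   # deleted file may still be on disk
  done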

Best,

tomo

(2011/08/24 16:23), Krishna Srinivas wrote:
> Tomo,
>
> Can you avoid using the DNS name "small" and instead use one of the IP
> addresses? Also, use the same IP address each time you NFS remount and
> see how it behaves. (You can use the IP address of small-1-4-private
> every time you remount the NFS share.)
>
> Thanks
> Krishna
>
>> On Wed, Aug 24, 2011 at 2:59 AM, Tomoaki Sato <tsato@valinux.co.jp> wrote:
>> Krishna
>>
>> Yes, I'm using DNS round robin for "small".
>>
>> Thanks,
>> tomo
>>
>> (2011/08/23 18:43), Krishna Srinivas wrote:
>>>
>>> Tomo,
>>> Are you using DNS round robin for "small"?
>>> Thanks
>>> Krishna
>>>
>>> On Mon, Aug 22, 2011 at 12:10 PM, Tomoaki Sato <tsato@valinux.co.jp>
>>> wrote:
>>>>
>>>> Shehjar,
>>>>
>>>> Where can I see updates on this issue?
>>>> Bugzilla?
>>>>
>>>> Thanks,
>>>> tomo
>>>>   (2011/08/17 15:06), Shehjar Tikoo wrote:
>>>>>
>>>>> Thanks for providing the exact steps. This is a bug. We're on it.
>>>>>
>>>>> -Shehjar
>>>>>
>>>>> Tomoaki Sato wrote:
>>>>>>
>>>>>> A simple way to reproduce the issue:
>>>>>> 1) NFS mount, create 'foo', and umount.
>>>>>> 2) NFS mount, delete 'foo', and umount.
>>>>>> 3) repeat 1) and 2) until ENOSPC.
>>>>>>
>>>>>> The command logs follow:
>>>>>> [root@vhead-010 ~]# rpm -qa | grep gluster
>>
>>



