[Gluster-users] deleted files make bricks full ?

Krishna Srinivas krishna at gluster.com
Fri Sep 16 08:16:07 UTC 2011


Tomo, we will get back to you on this.
Thanks
Krishna

On Mon, Aug 29, 2011 at 5:54 AM, Tomoaki Sato <tsato at valinux.co.jp> wrote:
> Krishna,
>
> I've reproduced the issue with a new set of 4 brick VMs, four-1-private
> through four-4-private.
>
> case #1: mount the same NFS server (four-1-private) and delete the file
> ==> OK (df shows the space freed)
> case #2: mount another NFS server (four-2-private) and delete the file
> ==> NG (df still shows the space as used)
>
> The following are the command logs:
>
> <case #1>
>
> [root at vhead-010 ~]# mount four-1-private:/four /mnt
> [root at vhead-010 ~]# df /mnt
> Filesystem           1K-blocks      Used Available Use% Mounted on
> four-1-private:/four 412849280 269468800 143380480  66% /mnt
> [root at vhead-010 ~]# dd if=/dev/zero of=/mnt/1GB bs=1MB count=1024
> 1024+0 records in
> 1024+0 records out
> 1024000000 bytes (1.0 GB) copied, 20.2477 seconds, 50.6 MB/s
> [root at vhead-010 ~]# df /mnt
> Filesystem           1K-blocks      Used Available Use% Mounted on
> four-1-private:/four 412849280 270469760 142379520  66% /mnt
> [root at vhead-010 ~]# ls -l /mnt/1GB
> -rw-r--r-- 1 root root 1024000000 Aug 29 08:38 /mnt/1GB
> [root at vhead-010 ~]# umount /mnt
> [root at vhead-010 ~]# mount four-1-private:/four /mnt
> [root at vhead-010 ~]# df /mnt
> Filesystem           1K-blocks      Used Available Use% Mounted on
> four-1-private:/four 412849280 270469760 142379520  66% /mnt
> [root at vhead-010 ~]# ls -l /mnt/1GB
> -rw-r--r-- 1 root root 1024000000 Aug 29 08:38 /mnt/1GB
> [root at vhead-010 ~]# rm /mnt/1GB
> rm: remove regular file `/mnt/1GB'? y
> [root at vhead-010 ~]# df /mnt
> Filesystem           1K-blocks      Used Available Use% Mounted on
> four-1-private:/four 412849280 269468800 143380480  66% /mnt
> [root at vhead-010 ~]#
>
> and <case #2>
>
> [root at vhead-010 ~]# dd if=/dev/zero of=/mnt/1GB bs=1MB count=1024
> 1024+0 records in
> 1024+0 records out
> 1024000000 bytes (1.0 GB) copied, 20.1721 seconds, 50.8 MB/s
> [root at vhead-010 ~]# df /mnt
> Filesystem           1K-blocks      Used Available Use% Mounted on
> four-1-private:/four 412849280 270469760 142379520  66% /mnt
> [root at vhead-010 ~]# ls -l /mnt/1GB
> -rw-r--r-- 1 root root 1024000000 Aug 29 08:40 /mnt/1GB
> [root at vhead-010 ~]# umount /mnt
> [root at vhead-010 ~]# mount four-2-private:/four /mnt
> [root at vhead-010 ~]# df /mnt
> Filesystem           1K-blocks      Used Available Use% Mounted on
> four-2-private:/four 412849280 270469760 142379520  66% /mnt
> [root at vhead-010 ~]# ls -l /mnt/1GB
> -rw-r--r-- 1 root root 1024000000 Aug 29 08:40 /mnt/1GB
> [root at vhead-010 ~]# rm /mnt/1GB
> rm: remove regular file `/mnt/1GB'? y
> [root at vhead-010 ~]# df /mnt
> Filesystem           1K-blocks      Used Available Use% Mounted on
> four-2-private:/four 412849280 270469760 142379520  66% /mnt
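>
> To check whether the space is really still held on the brick backends,
> the local brick filesystems can be inspected directly on each node (the
> brick path /export/four below is an assumption; substitute the actual
> brick directories):
>
> [root at four-1-private ~]# df /export/four
> [root at four-1-private ~]# ls -lh /export/four
>
> If a glusterfs NFS server process still holds the deleted file open,
> lsof should list it with a link count of zero:
>
> [root at four-2-private ~]# lsof +L1 | grep glusterfs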
>
> Best,
>
> tomo
>
> (2011/08/24 16:23), Krishna Srinivas wrote:
>>
>> Tomo,
>>
>> Can you avoid using the DNS name "small" and instead use one of the
>> IP addresses? Also, use the same IP address each time you remount
>> over NFS and see how it behaves. (You can use the IP address of
>> small-1-4-private every time you remount the NFS.)
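>>
>> For example (the IP address below is hypothetical; substitute the real
>> address of small-1-4-private, and adjust the export path if yours
>> differs):
>>
>> # mount -t nfs 192.168.1.14:/small /mnt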
>>
>> Thanks
>> Krishna
>>
>> On Wed, Aug 24, 2011 at 2:59 AM, Tomoaki Sato<tsato at valinux.co.jp>  wrote:
>>>
>>> Krishna
>>>
>>> Yes, I'm using DNS round robin for "small".
>>>
>>> Thanks,
>>> tomo
>>>
>>> (2011/08/23 18:43), Krishna Srinivas wrote:
>>>>
>>>> Tomo,
>>>> Are you using DNS round robin for "small" ?
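>>>> (For example, "host small" should return multiple A records if round
>>>> robin is configured.)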
>>>> Thanks
>>>> Krishna
>>>>


