[Gluster-users] DF reports incorrect sizes
Stas Oskin
stas.oskin at gmail.com
Sun Mar 29 22:20:29 UTC 2009
Hi.
I found out that even if I set a favorite child, erase the data from the
problematic server, and re-run ls -lR, it still doesn't replicate all the
data to the second server.
I think there is definitely a bug in AFR.
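For reference, this is roughly the sequence - a sketch only, where the export and
mount paths are taken from the volfiles quoted further down, and favorite-child
is an AFR option as far as I understand it:

  # 1. In client.vol, mark one replica as authoritative inside the
  #    cluster/afr volume "home":
  #      option favorite-child home1

  # 2. On the problematic server, clear its backend export directory
  #    (the directory exported by storage/posix in server.vol):
  rm -rf /media/storage/*

  # 3. On the client, walk the whole tree so that self-heal is triggered:
  ls -lR /mnt/media > /dev/null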
Regards.
2009/3/29 Stas Oskin <stas.oskin at gmail.com>
> Freeing some space and running ls -lR doesn't help.
>
> Regards.
>
> 2009/3/29 Stas Oskin <stas.oskin at gmail.com>
>
> Hi.
>>
>> After erasing all the data from my lab setup and restarting everything, it
>> happened again in less than 5 hours.
>>
>> Here is what I see:
>>
>> Client:
>> df -h: glusterfs 31G 29G 0 100% /mnt/media
>>
>> Server 1:
>> df -h: /dev/hda4 31G 29G 0 100% /media
>>
>> Server 2:
>> df -h: /dev/hda4 31G 20G 8.7G 70% /media
>>
>> This means the servers have lost each other again.
>>
>> Perhaps it's related to the fact that the space got filled up.
>>
>> Any idea how to diagnose it?
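>>
>> So far I'm checking it roughly like this - a sketch, where the paths are the
>> ones from the volfiles quoted below, and the trusted.afr.* names are, as far
>> as I understand, where AFR keeps its pending-operation counters:
>>
>> # On each server, compare what the backend export really holds:
>> du -sh /media/storage
>> df -h /media
>>
>> # On a file that differs between the two backends, dump the AFR xattrs
>> # (non-zero counters should mean a pending self-heal):
>> getfattr -d -m trusted.afr -e hex /media/storage/path/to/file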
>>
>>
>> Regards.
>>
>> 2009/3/26 Stas Oskin <stas.oskin at gmail.com>
>>
>>> Hi.
>>>
>>> It turns out that 1 of the 2 AFR volumes is not synchronized.
>>>
>>> Meaning that erasing or creating files through the mount is performed on only
>>> 1 node - but the free space is reported from both nodes.
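>>>
>>> For example, a quick way to see it (the paths are the ones from my setup,
>>> and the file name is arbitrary):
>>>
>>> # Create a file through the client mount:
>>> touch /mnt/media/afr-test
>>>
>>> # Then check the backend export on both servers - only one of them gets it:
>>> ls -l /media/storage/afr-test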
>>>
>>> Any idea what went wrong?
>>>
>>> Regards.
>>>
>>>
>>> 2009/3/26 Stas Oskin <stas.oskin at gmail.com>
>>>
>>>> Hi.
>>>>
>>>> Same as advised on this list; see below.
>>>>
>>>> By the way, I restarted both the clients and servers, and the reported
>>>> size is still the same.
>>>> Whatever it is, it's stuck quite persistently :).
>>>>
>>>> server.vol
>>>>
>>>> volume home1
>>>> type storage/posix # POSIX FS translator
>>>> option directory /media/storage # Export this directory
>>>> end-volume
>>>>
>>>> volume posix-locks-home1
>>>> type features/posix-locks
>>>> option mandatory-locks on
>>>> subvolumes home1
>>>> end-volume
>>>>
>>>> ### Add network serving capability to above home.
>>>> volume server
>>>> type protocol/server
>>>> option transport-type tcp
>>>> subvolumes posix-locks-home1
>>>> option auth.addr.posix-locks-home1.allow * # Allow access to "home1" volume
>>>> end-volume
>>>>
>>>>
>>>> client.vol
>>>>
>>>> ## Reference volume "home1" from remote server
>>>> volume home1
>>>> type protocol/client
>>>> option transport-type tcp/client
>>>> option remote-host 192.168.253.41 # IP address of remote host
>>>> option remote-subvolume posix-locks-home1 # use home1 on remote host
>>>> option transport-timeout 10 # value in seconds; it should be set relatively low
>>>> end-volume
>>>>
>>>> ## Reference volume "home2" from remote server
>>>> volume home2
>>>> type protocol/client
>>>> option transport-type tcp/client
>>>> option remote-host 192.168.253.42 # IP address of remote host
>>>> option remote-subvolume posix-locks-home1 # use home1 on remote host
>>>> option transport-timeout 10 # value in seconds; it should be set relatively low
>>>> end-volume
>>>>
>>>> volume home
>>>> type cluster/afr
>>>> option metadata-self-heal on
>>>> subvolumes home1 home2
>>>> end-volume
>>>>
>>>> volume writebehind
>>>> type performance/write-behind
>>>> option aggregate-size 128KB
>>>> option window-size 1MB
>>>> subvolumes home
>>>> end-volume
>>>>
>>>> volume cache
>>>> type performance/io-cache
>>>> option cache-size 512MB
>>>> subvolumes writebehind
>>>> end-volume
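>>>>
>>>> For completeness, this is roughly how the two volfiles above are started here -
>>>> a sketch, where the /etc/glusterfs locations are simply where I keep the files:
>>>>
>>>> # On each server:
>>>> glusterfsd -f /etc/glusterfs/server.vol
>>>>
>>>> # On the client, mount the AFR volume defined in client.vol:
>>>> glusterfs -f /etc/glusterfs/client.vol /mnt/media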
>>>>
>>>>
>>>> Regards.
>>>>
>>>> 2009/3/26 Vikas Gorur <vikas at zresearch.com>
>>>>
>>>>> 2009/3/26 Stas Oskin <stas.oskin at gmail.com>:
>>>>> > Hi.
>>>>> >
>>>>> > We erased all the data from our mount point, but the df still reports
>>>>> > it's almost full:
>>>>> >
>>>>> > glusterfs 31G 27G 2.5G 92% /mnt/glusterfs
>>>>> >
>>>>> > Running du either on the mount point or on the back-end directory
>>>>> > reports 914M.
>>>>> >
>>>>> > How do we get the space back?
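>>>>> >
>>>>> > One thing we are trying to rule out, as a sketch: space that df counts but
>>>>> > du cannot see is sometimes held by files that were deleted while still open
>>>>> > by some process.
>>>>> >
>>>>> > # List open files with a link count of 0 (deleted but still held open):
>>>>> > lsof +L1 | grep /media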
>>>>>
>>>>> What is your client and server configuration?
>>>>>
>>>>> Vikas
>>>>> --
>>>>> Engineer - Z Research
>>>>> http://gluster.com/
>>>>>
>>>>