[Gluster-users] Fwd: DF reports incorrect sizes

Stas Oskin stas.oskin at gmail.com
Sun Mar 29 22:21:55 UTC 2009


The issue is that one of the two AFR subvolumes is not synchronized.

That is, erasing or creating files through the mount takes effect on only one node, but the free space is still being reported from both nodes.

Any idea what went wrong?
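
If the replicas really are out of sync, is a full traversal of the mount the right way to force AFR self-heal? I'm thinking of something along these lines (just a sketch; /mnt/glusterfs is our mount point, and I'm assuming our GlusterFS version triggers self-heal on lookup):

  ls -lR /mnt/glusterfs > /dev/null
  # or, to stat every entry explicitly
  find /mnt/glusterfs -noleaf -print0 | xargs --null stat > /dev/null

And would that also bring the df numbers back in line once the back-ends agree?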

Regards.


2009/3/26 Stas Oskin <stas.oskin at gmail.com>

> Hi.
>
> It's the same configuration as advised on this list; see below.
>
> By the way, I restarted both the clients and servers, and the reported size
> is still the same.
> Whatever it is, it's stuck quite persistently :).
>
> server.vol
>
> volume home1
>  type storage/posix                   # POSIX FS translator
>  option directory /media/storage        # Export this directory
> end-volume
>
> volume posix-locks-home1
>  type features/posix-locks
>  option mandatory-locks on
>  subvolumes home1
> end-volume
>
> ### Add network serving capability to above home.
> volume server
>  type protocol/server
>  option transport-type tcp
>  subvolumes posix-locks-home1
>  option auth.addr.posix-locks-home1.allow * # Allow access to "home1" volume
> end-volume
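
(A side question on diagnosing this: should the trusted.afr changelog attributes on the back-end files tell us whether self-heal is pending? I'm thinking of something like the following on each server, run as root; the file name is only an example under our export directory:

  getfattr -d -m trusted.afr -e hex /media/storage/somefile

I'm not sure how to interpret the output, though.)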
>
>
> client.vol
>
> ## Reference volume "home1" from remote server
> volume home1
>  type protocol/client
>  option transport-type tcp/client
>  option remote-host 192.168.253.41      # IP address of remote host
>  option remote-subvolume posix-locks-home1     # use home1 on remote host
>  option transport-timeout 10           # value in seconds; it should be set relatively low
> end-volume
>
> ## Reference volume "home2" from remote server
> volume home2
>  type protocol/client
>  option transport-type tcp/client
>  option remote-host 192.168.253.42      # IP address of remote host
>  option remote-subvolume posix-locks-home1     # use home1 on remote host
>  option transport-timeout 10           # value in seconds; it should be set relatively low
> end-volume
>
> volume home
>  type cluster/afr
>  option metadata-self-heal on
>  subvolumes home1 home2
> end-volume
>
> volume writebehind
>   type performance/write-behind
>   option aggregate-size 128KB
>   option window-size 1MB
>   subvolumes home
> end-volume
>
> volume cache
>   type performance/io-cache
>   option cache-size 512MB
>   subvolumes writebehind
> end-volume
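
(Another data point we could collect, if it is relevant: comparing what each back-end reports directly against the mount, with the paths and addresses from the volfiles above:

  # on 192.168.253.41 and on 192.168.253.42
  df -h /media/storage
  du -sh /media/storage

  # on the client
  df -h /mnt/glusterfs

Would a large difference between the two servers here confirm that the replicas are out of sync?)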
>
>
> Regards.
>
> 2009/3/26 Vikas Gorur <vikas at zresearch.com>
>
> 2009/3/26 Stas Oskin <stas.oskin at gmail.com>:
>> > Hi.
>> >
>> > We erased all the data from our mount point, but the df still reports
>> > it's almost full:
>> >
>> > glusterfs 31G 27G 2.5G 92% /mnt/glusterfs
>> >
>> > Running du either on the mount point or on the back-end directory
>> > reports 914M.
>> >
>> > How do we get the space back?
>>
>> What is your client and server configuration?
>>
>> Vikas
>> --
>> Engineer - Z Research
>> http://gluster.com/
>>
>
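
P.S. Regarding the original df vs. du gap: could some process still be holding the deleted files open on one of the servers? If I understand it correctly, that would keep the space allocated even though du sees almost nothing; something like this on each server should show deleted-but-open files:

  # list open files whose link count is zero, limited to our export
  lsof +L1 | grep /media/storage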