[Gluster-users] Broken AFR - DU / DF - part 2

Stas Oskin stas.oskin at gmail.com
Thu Apr 2 10:18:24 UTC 2009


Sorry, I forgot about the other details.

* How I got there - I just let the cluster fill up with our test data, then our
auto-clean mechanism kicked in as the free space dropped below 5% and
started to erase the old test data.
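
Roughly, the auto-clean idea is something like the following (a simplified
Python sketch - the mount point, threshold and deletion policy here are only
illustrative, the real script differs in the details):

import os

MOUNT = "/mnt/glusterfs"   # illustrative mount point
THRESHOLD = 0.05           # start cleaning when free space drops below 5%

def free_ratio(path):
    # Fraction of the filesystem still free, as statvfs reports it.
    st = os.statvfs(path)
    return st.f_bavail / float(st.f_blocks)

def oldest_first(path):
    # All files under the mount point, oldest mtime first.
    files = []
    for root, _dirs, names in os.walk(path):
        for name in names:
            full = os.path.join(root, name)
            files.append((os.path.getmtime(full), full))
    return [f for _mtime, f in sorted(files)]

def auto_clean(path=MOUNT):
    # Erase the oldest test data until free space is back above the threshold.
    for victim in oldest_first(path):
        if free_ratio(path) >= THRESHOLD:
            break
        os.remove(victim)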

I have the feeling that the space ran out here as well, because the
auto-clean might not have run in time to catch it, which basically
triggered this behavior (as during the other two times).
Any idea what error I should be looking for in the logs? What does GlusterFS
report when the space runs out?

Btw, the used space is currently dropping :). Meaning the gap between the
space actually used and the space reported by the cluster is increasing.
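
Just to be clear about what I mean by the gap - it's the difference between
summing up the file sizes under the mount point (du-style) and the used space
that df/statvfs reports. Roughly (again just an illustrative Python sketch,
the mount point is made up):

import os

MOUNT = "/mnt/glusterfs"   # illustrative mount point

def du_bytes(path):
    # What "du" would add up: the sizes of all files under the mount point.
    total = 0
    for root, _dirs, names in os.walk(path):
        for name in names:
            total += os.path.getsize(os.path.join(root, name))
    return total

def df_used_bytes(path):
    # What "df" reports as used, taken from statvfs.
    st = os.statvfs(path)
    return (st.f_blocks - st.f_bfree) * st.f_frsize

print("du/df gap in bytes: %d" % (df_used_bytes(MOUNT) - du_bytes(MOUNT)))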

The OS on all servers is CentOS 5.2. The servers run x86, the client runs
x86_64 - I don't think it makes any difference.

Fuse is the latest one from the repository - 2.7.4. I haven't yet used the
patched version from GlusterFS.

Regards.


2009/4/2 Stas Oskin <stas.oskin at gmail.com>

> GlusterFS 2 rc7
>
> Server vol:
> ## Reference volume "home1" from remote server
> volume home1
>  type protocol/client
>  option transport-type tcp/client
>  option remote-host 192.168.253.41      # IP address of remote host
>  option remote-subvolume posix-locks-home1     # use home1 on remote host
>  option transport-timeout 10           # value in seconds; it should be set relatively low
> end-volume
>
> ## Reference volume "home2" from remote server
> volume home2
>  type protocol/client
>  option transport-type tcp/client
>  option remote-host 192.168.253.42      # IP address of remote host
>  option remote-subvolume posix-locks-home1     # use home1 on remote host
>  option transport-timeout 10           # value in seconds; it should be set relatively low
> end-volume
>
> volume home
>  type cluster/afr
>  option metadata-self-heal on
>  subvolumes home1 home2
> end-volume
>
> Client vol:
> ## Reference volume "home1" from remote server
> volume home1
>  type protocol/client
>  option transport-type tcp/client
>  option remote-host 192.168.253.42      # IP address of remote host
>  option remote-subvolume posix-locks-home1     # use home1 on remote host
>  option transport-timeout 10           # value in seconds; it should be set relatively low
> end-volume
>
> ## Reference volume "home2" from remote server
> volume home2
>  type protocol/client
>  option transport-type tcp/client
>  option remote-host 192.168.253.41      # IP address of remote host
>  option remote-subvolume posix-locks-home1     # use home1 on remote host
>  option transport-timeout 10           # value in seconds; it should be set relatively low
> end-volume
>
> volume home
>  type cluster/afr
>  option metadata-self-heal on
>  option favorite-child home1
>  subvolumes home1 home2
> end-volume
>
> volume writebehind
>   type performance/write-behind
>   option aggregate-size 128KB
>   option window-size 1MB
>   subvolumes home
> end-volume
>
> volume cache
>   type performance/io-cache
>   option cache-size 512MB
>   subvolumes writebehind
> end-volume
>
> 2009/4/2 Steve <steeeeeveee at gmx.net>
>
>> > There are no 0 size files this time though.
>> >
>> How did you manage to get there? What version of GlusterFS are you using?
>> What are the vol files? What kernel and what fuse version are you using?
>>
>

