[Gluster-devel] AFR problem with 2.0rc4

Gordan Bobic gordan at bobich.net
Thu Mar 19 11:48:18 UTC 2009


That's unavoidable to some extent, since the first server is the one that
is authoritative for locking. That means that all reads have to make a hit
on the 1st server, even if the data then gets retrieved from another server
in the cluster. Whether that explains all of the disparity you are seeing, I
don't know.
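
For reference, AFR does let you steer reads away from the first subvolume
with the read-subvolume option. A minimal client volfile sketch for a plain
two-way replicate setup (the hostnames and volume names here are made up,
so adjust to your own spec):

    volume client1
      type protocol/client
      option transport-type tcp
      option remote-host server1.example.com   # illustrative hostname
      option remote-subvolume brick
    end-volume

    volume client2
      type protocol/client
      option transport-type tcp
      option remote-host server2.example.com   # illustrative hostname
      option remote-subvolume brick
    end-volume

    volume replicate
      type cluster/replicate
      # prefer the second subvolume for reads instead of the default first
      option read-subvolume client2
      subvolumes client1 client2
    end-volume

Note this pins reads to one child rather than balancing them, but it can at
least move the read load off the server that is also handling all the lock
traffic.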

Gordan

On Thu, 19 Mar 2009 12:40:23 +0100, nicolas prochazka
<prochazka.nicolas at gmail.com> wrote:
> I understand that, but in this case I have another problem:
> it seems that load balancing between subvolumes does not work very well;
> the first server in the subvolumes list is read from far more often than
> the other servers, so I see heavy network and resource usage on that
> first server but not on the second.
>
> nicolas
> 
> On Thu, Mar 19, 2009 at 12:08 PM, Gordan Bobic <gordan at bobich.net> wrote:
>> On Thu, 19 Mar 2009 16:25:21 +0530, Vikas Gorur <vikas at zresearch.com>
>> wrote:
>>> 2009/3/19 Gordan Bobic <gordan at bobich.net>:
>>>> On Thu, 19 Mar 2009 16:14:18 +0530, Vikas Gorur <vikas at zresearch.com>
>>>> wrote:
>>>>> 2009/3/19 Gordan Bobic <gordan at bobich.net>:
>>>>>> How does this affect adding new servers into an existing cluster?
>>>>>
>>>>> Adding a new server will work --- as and when files are accessed, new
>>>>> extended attributes will be written.
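
(You can verify this on the backend as files are touched; the brick path
below is illustrative, and trusted.* attributes need root to read:

    # dump AFR changelog xattrs for one file on a server's backend export
    getfattr -d -m '^trusted.afr' -e hex /export/brick/path/to/file

New entries should show up for the added subvolume once a file has been
accessed through the mount.)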
>>>>
>>>> And presumably, permanently removing servers should also work the same
>>>> way?
>>>> I'm only asking because I had a whole array of weird spurious problems
>>>> before when I removed a server and added a new server at the same time.
>>>
>>> Removing a server might not work so seamlessly, since the new client
>>> will expect smaller extended attributes, whereas the older files will
>>> still have larger ones. IIRC, this was the source of the errors you
>>> faced ("Numerical result out of range"). Fixes for this are on the
>>> way.
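
(Until those fixes land, a possible manual workaround is to clear the stale
changelog key for the departed subvolume on each backend. The key name
below is a placeholder; it corresponds to whatever the removed client
volume was called in your volfile:

    # remove the AFR changelog entry that referenced the removed subvolume
    setfattr -x trusted.afr.client3 /export/brick/path/to/file

Do this only on files known to be in sync, since it discards pending-
operation state.)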
>>
>> Ah, OK, that makes sense. Thanks for clearing it up.
>>
>> Now if just the lockup on udev creation (root on glusterfs) in rc4 and
>> the big memory leak I reported get sorted out, I'll have a working
>> system. ;)
>>