[Gluster-devel] about afr

nicolas prochazka prochazka.nicolas at gmail.com
Wed Jan 14 13:29:05 UTC 2009


Hello again,
To close out this issue, here is the last piece of information I can send you:
if I stop glusterfsd (on server B) before stopping that server (hard
poweroff by pressing the on/off button), the problem does not occur. If I
hard-poweroff without stopping gluster (a real crash), the problem does
occur, presumably because a clean stop closes the client connections
immediately instead of leaving them to run into the transport timeout.
Regards
Nicolas Prochazka.

2009/1/14 nicolas prochazka <prochazka.nicolas at gmail.com>

> Hi again,
> I am continuing my tests:
> In my case, if a file is open on the gluster mount when one of the AFR
> servers is stopped,
> the gluster mount can no longer be accessed (it hangs) on that client. Any
> other client (C, for example) that did not have a file open during the stop
> is not affected; I can do an ls or an open after the transport-timeout
> period has passed.
> If I kill the process that is using the file, I can then use the gluster
> mount point without problems.
>
>
> Regards,
> Nicolas Prochazka.
>
> 2009/1/12 nicolas prochazka <prochazka.nicolas at gmail.com>
>
>>
>> For your attention:
>> it seems that this problem occurs only when files are open and in use on
>> the gluster mount point.
>> I use big computation files (~10 GB) that are, for the most part, being
>> read. In this case the problem occurs.
>> If I use only small files that are only created from time to time, no
>> problem occurs; the gluster mount can use the other AFR server.
>>
>> Regards,
>> Nicolas Prochazka
>>
>>
>>
>> 2009/1/12 nicolas prochazka <prochazka.nicolas at gmail.com>
>>
>>> Hi,
>>> I am trying to set
>>> option transport-timeout 5
>>> in protocol/client,
>>>
>>> so a maximum of 10 seconds before gluster returns to a normal situation?
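>>>
>>> This is a minimal sketch of where I am putting the option, in each
>>> protocol/client volume of my client config (quoted below):
>>>
>>> volume brick_10.98.98.1
>>> type protocol/client
>>> option transport-type tcp/client
>>> option remote-host 10.98.98.1
>>> option remote-subvolume brick
>>> option transport-timeout 5  # seconds; afr replies after at most 2 * this
>>> end-volume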
>>> No success; I am always in the same situation: a 'ls /mnt/gluster' does
>>> not respond even after more than 10 minutes.
>>> I cannot reuse the gluster mount except by killing the glusterfs process.
>>>
>>> Regards
>>> Nicolas Prochazka
>>>
>>>
>>>
>>> 2009/1/12 Raghavendra G <raghavendra at zresearch.com>
>>>
>>>> Hi Nicolas,
>>>>
>>>> How much time did you wait before concluding that the mount point was not
>>>> working? afr waits for a maximum of (2 * transport-timeout) seconds before
>>>> sending a reply back to the application. Can you wait at least that long
>>>> and check whether this is the issue you are facing?
>>>>
>>>> regards,
>>>>
>>>> On Mon, Jan 12, 2009 at 7:49 PM, nicolas prochazka <
>>>> prochazka.nicolas at gmail.com> wrote:
>>>>
>>>>> Hi,
>>>>> I have installed this setup to test Gluster:
>>>>>
>>>>> + 2 servers (A, B)
>>>>>    - with the glusterfsd server (glusterfs--mainline--3.0--patch-842)
>>>>>    - with the glusterfs client
>>>>> (server conf file below.)
>>>>>
>>>>> + 1 server, C, in client-only mode.
>>>>>
>>>>> My issue:
>>>>> If C opens a big file with this client configuration and I then stop
>>>>> server A (or B),
>>>>> the gluster mount point on server C seems to block; I cannot do an
>>>>> 'ls -l', for example.
>>>>> Is this normal? Since C opened its file on A or B, does it block when
>>>>> that server goes down?
>>>>> I thought that with client-side AFR the client could reopen the file on
>>>>> the other server; am I wrong?
>>>>> Should I use the HA translator?
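>>>>>
>>>>> (What I have in mind is something like the following untested sketch,
>>>>> assuming a cluster/ha translator is available in this release, layered
>>>>> over the two protocol/client volumes from my client config below:)
>>>>>
>>>>> volume ha
>>>>> type cluster/ha              # untested; assumes cluster/ha fails over
>>>>>                              # between its subvolumes when one goes down
>>>>> subvolumes brick_10.98.98.1 brick_10.98.98.2
>>>>> end-volume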
>>>>>
>>>>> Regards,
>>>>> Nicolas Prochazka.
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> volume brickless
>>>>> type storage/posix
>>>>> option directory /mnt/disks/export
>>>>> end-volume
>>>>>
>>>>> volume brick
>>>>> type features/posix-locks
>>>>> option mandatory on          # enables mandatory locking on all files
>>>>> subvolumes brickless
>>>>> end-volume
>>>>>
>>>>> volume server
>>>>> type protocol/server
>>>>> subvolumes brick
>>>>> option transport-type tcp
>>>>> option auth.addr.brick.allow 10.98.98.*
>>>>> end-volume
>>>>> ---------------------------
>>>>>
>>>>> client config
>>>>> volume brick_10.98.98.1
>>>>> type protocol/client
>>>>> option transport-type tcp/client
>>>>> option remote-host 10.98.98.1
>>>>> option remote-subvolume brick
>>>>> end-volume
>>>>>
>>>>> volume brick_10.98.98.2
>>>>> type protocol/client
>>>>> option transport-type tcp/client
>>>>> option remote-host 10.98.98.2
>>>>> option remote-subvolume brick
>>>>> end-volume
>>>>>
>>>>> volume last
>>>>> type cluster/replicate
>>>>> subvolumes brick_10.98.98.1 brick_10.98.98.2
>>>>> end-volume
>>>>>
>>>>> volume iothreads
>>>>> type performance/io-threads
>>>>> option thread-count 2
>>>>> option cache-size 32MB
>>>>> subvolumes last
>>>>> end-volume
>>>>>
>>>>> volume io-cache
>>>>> type performance/io-cache
>>>>> option cache-size 1024MB             # default is 32MB
>>>>> option page-size  1MB              #128KB is default option
>>>>> option force-revalidate-timeout 2  # default is 1
>>>>> subvolumes iothreads
>>>>> end-volume
>>>>>
>>>>> volume writebehind
>>>>> type performance/write-behind
>>>>> option aggregate-size 256KB # default is 0bytes
>>>>> option window-size 3MB
>>>>> option flush-behind on      # default is 'off'
>>>>> subvolumes io-cache
>>>>> end-volume
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> Raghavendra G
>>>>
>>>>
>>>
>>
>

