[Gluster-users] glusterfs replication write complete?

可樂我 colacolameme at gmail.com
Fri May 2 02:21:45 UTC 2014


Thanks for your help, Kaushal M.

Now I understand what you mean.

Thank you very much!


2014-04-30 14:32 GMT+08:00 Kaushal M <kshlmster at gmail.com>:

> My understanding was a little incorrect. AFR (the glusterfs translator
> which actually handles replication), works in the following way.
>
> 1. AFR gets an operation
> 2. AFR sends the operation to all available replicas, i.e. only those
> replicas which are online. If none are online, AFR returns an error.
> 3. AFR waits for replicas to return with success or failure.
> 4. If all replicas to which the operation was sent return with failure,
> AFR returns with failure. Even if one replica returns with success, AFR
> returns success.
>
> So, to answer your question: gluster returns an error only if the write
> fails on all available replicas, or if none of the replicas are available.
> If the write succeeds on even one replica but fails on the others, gluster
> will return success.
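The return logic described above can be sketched roughly as follows. This is a simplified illustration only, not the actual AFR implementation; each replica is modelled as a dict with an "online" flag.

```python
# Simplified sketch of AFR's return logic for a write (illustration
# only, not the actual GlusterFS code).

def afr_write(replicas, write_op):
    # Step 2: only replicas that are online receive the operation.
    online = [r for r in replicas if r["online"]]
    if not online:
        return "failure"  # no replica available at all -> error

    # Steps 2-3: send the operation to every online replica and wait
    # for each one to report success (True) or failure (False).
    results = [write_op(r) for r in online]

    # Step 4: success if at least one replica succeeded, failure only
    # if every attempted write failed.
    return "success" if any(results) else "failure"
```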
>
> Hope this clears your doubts.
>
> ~kaushal
>
>
> On Wed, Apr 30, 2014 at 7:40 AM, 可樂我 <colacolameme at gmail.com> wrote:
>
>> Thanks Kaushal M.
>>
>> When I write a file into a glusterfs replication volume, and the write
>> fails on one replica (e.g. the brick server is offline or the brick
>> mount point has been deleted),
>> I want the operation to tell me that the write failed.
>>
>> Could glusterfs do this?
>> Or is the only way to find it in the log file?
>>
>> Thank you very much!!! ^^
>>
>>
>>
>> 2014-04-30 0:35 GMT+08:00 Kaushal M <kshlmster at gmail.com>:
>>
>> A write will return after it completes on all of the replicas that are
>>> up. In your example, you have replica 2, but only one of the bricks is
>>> up. So the write will return after it finishes on the up brick. The data
>>> that was written to just one replica will be copied over, or healed to
>>> the other replica when it comes back online. This healing is done via
>>> the self-heal mechanism.
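When the offline brick comes back, the pending writes are healed to it. The progress of healing can be checked from the gluster CLI (a sketch; exact output varies by version, and `test-vol` is the volume name from this thread):

```shell
# List entries that still need to be healed on the test-vol volume
gluster volume heal test-vol info

# Trigger an index self-heal explicitly (the self-heal daemon also
# runs this periodically on its own)
gluster volume heal test-vol
```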
>>>
>>> On Tue, Apr 29, 2014 at 8:25 PM, 可樂我 <colacolameme at gmail.com> wrote:
>>> > Thanks for your help, Kaushal M,
>>> >
>>> > but I have some questions about it.
>>> >
>>> > I created a glusterfs replica volume with bricks on different nodes,
>>> > like this:
>>> > #gluster vol create test-vol replica 2 node-A:/brick1 node-B:/brick2
>>> >
>>> > I mount this volume on /mnt
>>> > #mount.glusterfs node-A:/test-vol /mnt
>>> >
>>> > I write a file using the dd command
>>> > #dd if=/dev/zero of=/mnt/test-file bs=1M count=2048
>>> >
>>> > I shut down node-B while the dd command was executing.
>>> >
>>> > The write of test-file still finished, without reporting any error
>>> > message.
>>> >
>>> > If a write returns only after it is completed on all the replicas,
>>> > I would expect some error to occur.
>>> >
>>> > I don't understand why it can still finish without error.
>>> >
>>> > Thanks!!
>>> >
>>> >
>>> > 2014-04-29 22:38 GMT+08:00 Kaushal M <kshlmster at gmail.com>:
>>> >
>>> >> For a replica volume a write will return only after it is completed on
>>> >> all the replicas.
>>> >>
>>> >> On Tue, Apr 29, 2014 at 8:06 PM, 可樂我 <colacolameme at gmail.com> wrote:
>>> >> > Hi everyone,
>>> >> > I have some questions about replication
>>> >> >
>>> >> > When I write a file into a glusterfs volume (a 2-replica volume),
>>> >> > does the write return OK when the operation completes on one of the
>>> >> > replicas, or only when it completes on both replicas?
>>> >> >
>>> >> > Can I require it to return OK only when the write completes on all
>>> >> > replicas?
>>> >> >
>>> >> > sorry~ my English is poor
>>> >> > I hope you can understand my questions
>>> >> > Thank you very much!
>>> >> > Thanks!
>>> >> >
>>> >> >
>>> >> >
>>> >> > _______________________________________________
>>> >> > Gluster-users mailing list
>>> >> > Gluster-users at gluster.org
>>> >> > http://supercolony.gluster.org/mailman/listinfo/gluster-users
>>> >
>>> >
>>>
>>
>>
>