[Gluster-users] Read from fastest node only
David Cunningham
dcunningham at voisonics.com
Tue Aug 10 21:02:15 UTC 2021
Thanks Ravi. That was my understanding - that the file's health is checked
with all nodes when it is opened, and then the file is actually read from
one node. It seems clear that checking with all nodes when opening a file
for reading can't be avoided at the moment.
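In case it's useful to anyone else following this thread: the option being
discussed is cluster.choose-local, which (as far as I understand it) only
affects which brick serves the reads, not the lookups. With a hypothetical
volume name "gvol0":

    # show the current setting
    gluster volume get gvol0 cluster.choose-local

    # prefer a local brick for reads when the client is also a server
    gluster volume set gvol0 cluster.choose-local on
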
On Tue, 10 Aug 2021 at 22:32, Ravishankar N <ranaraya at redhat.com> wrote:
>
>
> On Tue, Aug 10, 2021 at 3:23 PM David Cunningham <
> dcunningham at voisonics.com> wrote:
>
>> Thanks Ravi, so if I understand correctly, latency to all the nodes
>> remains an issue on all file reads.
>>
>>
> Hi David, yes, but only for the lookup and opening of the fd. Once the fd
> is open, all readv calls will go only to the chosen brick.
> -Ravi
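>
> One way that should show this (hypothetical volume name "gvol0") is to
> enable profiling and watch the per-brick fop counters while a client
> re-reads a file:
>
>     gluster volume profile gvol0 start
>     gluster volume profile gvol0 info
>
> If it behaves as described, LOOKUP shows up on all bricks, but the READ
> count only grows on the brick chosen for that fd.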
>
>
>>
>> On Tue, 10 Aug 2021 at 16:49, Ravishankar N <ranaraya at redhat.com> wrote:
>>
>>>
>>>
>>> On Tue, Aug 10, 2021 at 8:07 AM David Cunningham <
>>> dcunningham at voisonics.com> wrote:
>>>
>>>> Hi Gionatan,
>>>>
>>>> Thanks for that reply. Under normal circumstances there would be
>>>> nothing that needs to be healed, but how can local-node know this is really
>>>> the case without checking the other nodes?
>>>>
>>>> If using local-node tells GlusterFS not to check other nodes for the
>>>> health of the file at all, then this sounds exactly like what we're
>>>> looking for, although only for a GlusterFS node that is also a client. My
>>>> understanding is that local-node isn't applicable to a machine that only
>>>> has the client.
>>>>
>>>> Does anyone know definitively what the case is here? If not, I guess we
>>>> would need to test it.
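>>>>
>>>> From the command line, the closest check I know of is the heal info
>>>> output, which should be empty when all copies are in sync (volume name
>>>> "gvol0" is just an example):
>>>>
>>>>     gluster volume heal gvol0 info
>>>>
>>>> That doesn't tell us what the client knows internally, of course, which
>>>> is the part we'd have to test.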
>>>>
>>>
>>>
>>> Knowledge about the file's health is maintained in-memory by AFR xlator
>>> on each gluster client (irrespective of where it is mounted). This info is
>>> computed during lookup (lookups are always sent to all replica copies)
>>> which is issued before any data operation (read, write, etc). See
>>> https://docs.gluster.org/en/latest/Administrator-Guide/Automatic-File-Replication/#read-transactions
>>> .
>>>
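>>> In case it isn't obvious from that page: which replica ends up serving the
>>> reads is governed by cluster.choose-local and cluster.read-hash-mode. A
>>> rough sketch, with "gvol0" as a placeholder volume name and the value
>>> meanings as I read them from the docs:
>>>
>>>     # 0: first readable brick, 1: hash of the file's GFID,
>>>     # 2: hash of GFID + client PID, 3: brick with least outstanding reads
>>>     gluster volume set gvol0 cluster.read-hash-mode 1
>>>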
>>> Regards,
>>> Ravi
>>>
>>>
>>>> Thank you.
>>>>
>>>> On Thu, 5 Aug 2021 at 07:28, Gionatan Danti <g.danti at assyoma.it> wrote:
>>>>
>>>>> On 2021-08-03 19:51, Strahil Nikolov wrote:
>>>>> > The difference between a thin and a usual arbiter is that the thin
>>>>> > arbiter comes into play only when it's needed (when one of the data
>>>>> > bricks is down), so the thin arbiter's latency won't affect you as
>>>>> > long as both data bricks are running.
>>>>> >
>>>>> > Keep in mind that the thin arbiter is less commonly used. For example,
>>>>> > I have never deployed a thin arbiter.
>>>>>
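>>>>> For reference, I believe the syntax for creating such a volume is along
>>>>> these lines (hostnames and brick paths are made up):
>>>>>
>>>>>     gluster volume create gvol0 replica 2 thin-arbiter 1 \
>>>>>         server1:/bricks/b1 server2:/bricks/b2 \
>>>>>         arbiter1:/bricks/thin-arbiter
>>>>>
>>>>> where the thin-arbiter brick stores only a small replica-id file rather
>>>>> than a full copy of the data, so it can sit on a distant, higher-latency
>>>>> machine.
>>>>>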
>>>>> Maybe I am horribly wrong, but local-node reads should *not* involve
>>>>> other nodes in any manner - i.e. no checksum or voting is done for
>>>>> reads. AFR hashing should spread different files to different nodes when
>>>>> doing striping, but for mirroring any node should have a valid copy of
>>>>> the requested data.
>>>>>
>>>>> So when using choose-local, all reads which can really be local (i.e.
>>>>> the requested file is available) should not suffer from remote-party
>>>>> latency. Is that correct?
>>>>>
>>>>> Thanks.
>>>>>
>>>>> --
>>>>> Danti Gionatan
>>>>> Supporto Tecnico
>>>>> Assyoma S.r.l. - www.assyoma.it
>>>>> email: g.danti at assyoma.it - info at assyoma.it
>>>>> GPG public key ID: FF5F32A8
>>>>>
>>>>
>>>>
>>>> --
>>>> David Cunningham, Voisonics Limited
>>>> http://voisonics.com/
>>>> USA: +1 213 221 1092
>>>> New Zealand: +64 (0)28 2558 3782
>>>>
>>>
>>
>> --
>> David Cunningham, Voisonics Limited
>> http://voisonics.com/
>> USA: +1 213 221 1092
>> New Zealand: +64 (0)28 2558 3782
>>
>
--
David Cunningham, Voisonics Limited
http://voisonics.com/
USA: +1 213 221 1092
New Zealand: +64 (0)28 2558 3782