[Gluster-devel] Moratorium on new patch acceptance
Raghavendra G
raghavendra at gluster.com
Tue May 19 12:11:15 UTC 2015
On Tue, May 19, 2015 at 5:40 PM, Raghavendra G <raghavendra at gluster.com>
wrote:
> After discussion with Vijaykumar mallikarjuna and other inputs in this
> thread, we are proposing all quota tests to comply to following criteria:
>
> * always invoke dd with oflag=append (to make sure there are no parallel
> writes) and conv=fdatasync (to make sure errors, if any, are delivered to
> the application; turning off flush-behind is optional, since fdatasync
> acts as a barrier)
>
> OR
>
> * turn off write-behind in the nfs client and the glusterfs nfs server.
>
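> As a concrete illustration, a minimal sketch of what this could look like
> in a .t test is below (the volume/mount variables and TEST assertions
> follow the tests/*.t framework conventions; the file name, sizes, and the
> nfs-specific option shown are my assumptions, not a finalized test):
>
>     # Append 10MB in 1MB chunks. oflag=append serializes the writes and
>     # conv=fdatasync makes dd issue fdatasync() before exiting, so an
>     # EDQUOT from quota reaches dd instead of being swallowed by
>     # write-behind/flush-behind. With the quota limit already exceeded,
>     # dd is expected to fail, hence "TEST !".
>     TEST ! dd if=/dev/zero of=$M0/quota_test_file bs=1M count=10 \
>              oflag=append conv=fdatasync
>
>     # Alternative (second criterion): disable write-behind instead.
>     TEST $CLI volume set $V0 performance.write-behind off
>     TEST $CLI volume set $V0 performance.nfs.write-behind off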
>
> Which of these do you think is the better test scenario?
>
> Also, we don't yet have confirmation of the RCA that parallel writes are
> indeed the culprit. We are trying to reproduce the issue locally. @Shyam,
> it would be helpful if you could confirm the hypothesis :).
>
> regards,
> Raghavendra.
>
> On Tue, May 19, 2015 at 5:27 PM, Raghavendra G <raghavendra at gluster.com>
> wrote:
>
>>
>>
>> On Tue, May 19, 2015 at 4:26 PM, Jeff Darcy <jdarcy at redhat.com> wrote:
>>
>>> > No, my suggestion was aimed at not having parallel writes. In this
>>> > case quota won't even fail the writes with EDQUOT, for the reasons
>>> > explained above. Yes, we need to disable flush-behind along with this
>>> > so that errors are delivered to the application.
>>>
>>> Would conv=sync help here? That should prevent any kind of write
>>> parallelism.
>>>
>>
>> An strace of dd shows that:
>>
>> * fdatasync is issued only once, at the end of all writes, when
>> conv=fdatasync
>> * no fsync or fdatasync is issued at all when conv=sync (unsurprising in
>> hindsight: dd's conv=sync only pads short input blocks with NULs; it is
>> oflag=sync that opens the output file with O_SYNC)
>>
>> So, using conv=fdatasync in the test cannot prevent write-parallelism
>> induced by write-behind. Parallelism would've been prevented only if dd had
>> issued fdatasync after each write or opened the file with O_SYNC.
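>>
>> A way to check this yourself is something along these lines (the mount
>> path, file name and sizes are only illustrative):
>>
>>     # Trace just the calls of interest. With conv=fdatasync a single
>>     # fdatasync() shows up after the final write(); with conv=sync no
>>     # fsync()/fdatasync() appears at all.
>>     strace -e trace=open,openat,write,fsync,fdatasync \
>>         dd if=/dev/zero of=/mnt/glusterfs/testfile bs=1M count=10 \
>>         conv=fdatasync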
>>
>>> If it doesn't, I'd say that's a true test failure somewhere in our
>>> stack. A similar possibility would be to invoke dd multiple times with
>>> oflag=append.
>>>
>>
>> Yes, appending writes curb parallelism (at least in glusterfs; I am not
>> sure how the nfs client behaves) and hence can be used as an alternative
>> solution, e.g. as sketched below.
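>>
>> A minimal sketch of that alternative (again, mount path, file name and
>> sizes are illustrative):
>>
>>     # Each dd invocation open()s the file, appends one block with a
>>     # trailing fdatasync(), and exits before the next one starts, so no
>>     # two writes are ever in flight at the same time.
>>     for i in $(seq 1 10); do
>>         dd if=/dev/zero of=/mnt/glusterfs/testfile bs=1M count=1 \
>>            oflag=append conv=fdatasync
>>     done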
>>
>> On a slightly unrelated note, flush-behind is immaterial in this test,
>> since fdatasync acts as a barrier anyway.
>>
--
Raghavendra G