[Gluster-devel] Moratorium on new patch acceptance
Raghavendra Gowdappa
rgowdapp at redhat.com
Tue May 19 08:25:13 UTC 2015
----- Original Message -----
> From: "Vijay Bellur" <vbellur at redhat.com>
> To: "Raghavendra Gowdappa" <rgowdapp at redhat.com>, "Shyam" <srangana at redhat.com>
> Cc: gluster-devel at gluster.org
> Sent: Tuesday, May 19, 2015 1:29:57 PM
> Subject: Re: [Gluster-devel] Moratorium on new patch acceptance
>
> On 05/19/2015 12:21 PM, Raghavendra Gowdappa wrote:
> >
> >
>
> >>> Yes, this is a possible scenario. There is a finite time window between:
> >>>
> >>> 1. Querying the size of a directory, i.e., checking whether the current
> >>> write can be allowed.
> >>> 2. The "effect" of this write getting reflected in the sizes of all the
> >>> parent directories of the file, up to the root.
> >>> If 1 and 2 were atomic, another parallel write which could have exceeded
> >>> the quota limit could not have slipped through. Unfortunately, in the
> >>> current scheme of things they are not atomic. There can be parallel
> >>> writes in this test case because of nfs-client and/or glusterfs
> >>> write-back (even though we have a single single-threaded application -
> >>> dd - running). One way of testing this hypothesis is to disable both nfs
> >>> and glusterfs write-back and run the same (unmodified) test; the test
> >>> should then always succeed (i.e., dd should fail). To disable write-back
> >>> in nfs, you can use the noac option while mounting.
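
For concreteness, the hypothesis test I have in mind would look roughly like
this (volume name, server and mount point below are placeholders):

    # disable glusterfs write-back on the volume
    gluster volume set <volname> performance.write-behind off
    # mount over NFS with client-side caching disabled; noac also makes
    # writes from the client synchronous
    mount -t nfs -o vers=3,noac <server>:/<volname> /mnt/nfs
    # run the unmodified quota test; with no write-back anywhere,
    # dd should now always fail once the limit is crossed
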
> >>>
> >>> The situation becomes worse in real-life scenarios because of the
> >>> parallelism involved at many layers:
> >>>
> >>> 1. Multiple applications, each possibly multithreaded, writing to one or
> >>> many files in a quota subtree.
> >>> 2. Write-back in the NFS client and glusterfs.
> >>> 3. Multiple bricks holding the files of a quota subtree, each brick
> >>> simultaneously processing many write requests through io-threads.
> >>
> >> 4. Background accounting of directory sizes _after_ a write is complete.
> >>
> >>>
> >>> I've tried in the past to fix the issue, though unsuccessfully. It seems
> >>> to me that one effective strategy is to make enforcement and the update
> >>> of parent directory sizes atomic. But if we do that, we end up adding
> >>> the latency of accounting to the latency of the fop. Other options can
> >>> be explored. However, our Quota functionality requirements allow a
> >>> buffer of 10% while enforcing limits, so this issue has not been high on
> >>> our priority list till now. Our tests should therefore also allow for
> >>> this 10% buffer when expecting failures.
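
To make the window concrete with made-up numbers: say the limit is 100MB,
current usage is 85MB, and two 10MB writes are in flight:

    t0: writer A checks: 85MB + 10MB <= 100MB, write allowed
    t1: writer B checks: usage still reads 85MB (A's write is not yet
        accounted), so 85MB + 10MB <= 100MB, write allowed too
    t2: both writes complete; accounting later settles at 105MB - over
        the limit, but within the 10% buffer
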
> >
> > Since most of our tests run a single instance of single-threaded dd on a
> > single mount, if the hypothesis turns out to be true, we can turn off
> > nfs-client and glusterfs write-back in all tests related to Quota.
> > Comments?
> >
>
> Even with write-behind enabled, dd should get a failure upon close() if
> quota were to return EDQUOT for any of the writes. I suspect that
> flush-behind, being enabled by default in write-behind, can mask a
> failure at close(). Disabling flush-behind in the tests might take care
> of fixing the tests.
No, my suggestion was aimed at not having parallel writes at all. In that case quota won't even fail the writes with EDQUOT, for the reasons explained above. Yes, we need to disable flush-behind along with this so that errors are delivered to the application.
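
Concretely, the Quota tests could do something like the following (a sketch
using our regression-test macros; I'm assuming the mount_nfs helper passes
extra mount options through, as other nfs tests do):

    # turn off caching on both the glusterfs and the nfs-client side
    TEST $CLI volume set $V0 performance.write-behind off
    TEST $CLI volume set $V0 performance.flush-behind off
    # noac disables nfs-client attribute caching and makes writes synchronous
    TEST mount_nfs $H0:/$V0 $N0 noac
    # EDQUOT now reaches dd on the write/close path instead of being
    # absorbed by a cache, so the test can reliably expect dd to fail
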
>
> It would be good to have nfs + quota coverage in the tests. So let us
> not disable nfs tests for quota.
The suggestion was to continue using nfs, but to prevent nfs clients from using a write-back cache.
>
> Thanks,
> Vijay
>
>