[Gluster-Maintainers] [Gluster-devel] Release 3.13.2: Planned for 19th of Jan, 2018

Ravishankar N ravishankar at redhat.com
Fri Jan 19 00:56:08 UTC 2018

On 01/19/2018 06:19 AM, Shyam Ranganathan wrote:
> On 01/18/2018 07:34 PM, Ravishankar N wrote:
>> On 01/18/2018 11:53 PM, Shyam Ranganathan wrote:
>>> On 01/02/2018 11:08 AM, Shyam Ranganathan wrote:
>>>> Hi,
>>>> As release 3.13.1 is announced, here are the needed details for
>>>> 3.13.2:
>>>> Release date: 19th Jan, 2018 (20th is a Saturday)
>>> Heads up, this is tomorrow.
>>>> Tracker bug for blockers:
>>>> https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.13.2
>>> The one blocker bug has had its patch merged, so I am assuming there are
>>> no more that should block this release.
>>> As usual, shout out in case something needs attention.
>> Hi Shyam,
>> 1. There is one patch, https://review.gluster.org/#/c/19218/, which
>> introduces full locks for afr writevs. We're introducing this as a
>> GD_OP_VERSION_3_13_2 option. Please wait for it to be merged on the
>> 3.13 branch today. Karthik, please backport the patch.
> Do we need this behind an option, if existing behavior causes split
> brains?
Yes, this is for split-brain prevention. Arbiter volumes already take 
full locks, but normal replica volumes do not; this change brings the 
same behaviour to normal replica volumes. See Pranith's comment in 
> Or is the option being added for workloads that do not have
> multiple clients, or clients writing to non-overlapping regions (and thus
> need not suffer a performance penalty, maybe? But they should not
> anyway, as a single client and AFR eager locks should ensure this is done
> only once for the lifetime of the file being accessed, right?)
Yes, single writers take eager lock which is always a full lock 
regardless of this change.
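
The range-vs-full lock distinction being discussed can be sketched with
POSIX record locks, which use the same convention as gluster's internal
inodelk: a length of 0 means "to end of file", so start=0, len=0 covers the
whole file. This is only an illustrative analogy, not AFR code; the helper
names are made up for the sketch.

```python
import fcntl
import os
import tempfile

def lock_region(fd, offset, length):
    """Range lock: covers only bytes [offset, offset+length).
    Analogous to AFR locking just the region a writev touches."""
    fcntl.lockf(fd, fcntl.LOCK_EX, length, offset)

def lock_full(fd):
    """Full lock: start=0, len=0 means the whole file in POSIX terms.
    Analogous to the full lock the patch takes for replica writevs."""
    fcntl.lockf(fd, fcntl.LOCK_EX, 0, 0)

fd, path = tempfile.mkstemp()
os.write(fd, b"x" * 4096)
lock_region(fd, 1024, 512)   # old behaviour: lock only the write's range
lock_full(fd)                # new behaviour: one lock over the whole file
os.close(fd)
os.unlink(path)
```

Two concurrent writers to overlapping regions from different clients would
serialize fully under the second scheme, which is what closes the split-brain
window.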
> Basically I would like to keep options out if possible in backports, as
> that changes the gluster op-version and involves other upgrade steps to
> be sure users can use this option etc. Which means more reading and
> execution of upgrade steps for our users. Hence the concern!
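
The extra upgrade step Shyam is referring to looks roughly like the
following, assuming the usual gluster op-version workflow; the numeric
value 31302 is the assumed encoding of GD_OP_VERSION_3_13_2 and depends
on the merged patch:

```shell
# Check the cluster's current working op-version.
gluster volume get all cluster.op-version

# Only after every node in the cluster is running 3.13.2 can the
# op-version be raised, which is what makes the new option settable.
# 31302 is an assumed value following gluster's usual encoding.
gluster volume set all cluster.op-version 31302
```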
>> 2. I'm also backporting https://review.gluster.org/#/c/18571/. Please
>> consider merging it too today if it is ready.
> This should be fine.
>> We will attach the relevant BZs to the tracker bug.
>> Thanks
>> Ravi
>>>> Shyam
>>>> _______________________________________________
>>>> Gluster-devel mailing list
>>>> Gluster-devel at gluster.org
>>>> http://lists.gluster.org/mailman/listinfo/gluster-devel
