On Wed, Feb 21, 2018 at 9:47 AM, Amye Scavarda <amye@redhat.com> wrote:
> It may be more effective to email the direct parties; I know that I filter
> out mailing lists and don't always see this in time. Given that this is
> somewhat time critical and we'll need to get release notes out shortly, I
> suggest taking it to direct emails.

I added the individual owners to the CC list. For some reason they are not
reflected in the CC list, but I guess they would've received direct mails.

> - amye

On Tue, Feb 20, 2018 at 8:11 PM, Raghavendra Gowdappa <rgowdapp@redhat.com> wrote:

From 'git log release-3.13..release-4.0' I see the following patches that
might have an impact on performance:

commit a32ff73c06e1e14589817b1701c1c8d0f05aaa04
Author: Atin Mukherjee <amukherj@redhat.com>
Date:   Mon Jan 29 10:23:52 2018 +0530

    glusterd: optimize glusterd import volumes code path

    In case a version mismatch is detected for one of the volumes,
    glusterd was ending up updating all the volumes, which is overkill.

    >mainline patch : https://review.gluster.org/#/c/19358/

    Change-Id: I6df792db391ce3a1697cfa9260f7dbc3f59aa62d
    BUG: 1540554
    Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
    (cherry picked from commit bb34b07fd2ec5e6c3eed4fe0cdf33479dbf5127b)

commit ea972d9f5c9b318429c228108c21a334b4acd95c
Author: Sakshi Bansal <sabansal@redhat.com>
Date:   Mon Jan 22 14:38:17 2018 +0530

    dentry fop serializer: added new server side xlator for dentry fop serialization

    Problems addressed by this xlator:

    [1]. Prevent races between parallel mkdir, mkdir and lookup, etc.

    Fops like mkdir/create, lookup, rename, unlink, link that happen on a
    particular dentry must be serialized to ensure atomicity.

    Another possible case is a fresh lookup to find the existence of a path
    whose gfid is not set yet. Further, storage/posix employs a ctime-based
    heuristic 'is_fresh_file' (interval time is less than 1 second of the
    current time) to check the freshness of a file. With serialization of
    these two fops (lookup & mkdir), we eliminate the race altogether.

    [2]. Staleness of dentries

    This causes an exponential increase in traversal time for any inode in
    the subtree of the directory pointed to by the stale dentry.

    Cause: a stale dentry is created because of the following two operations:

    a. dentry creation due to inode_link, done during operations like
       lookup, mkdir, create, mknod, symlink, and
    b. dentry unlinking due to various operations like rmdir, rename,
       unlink.

    The reason is that __inode_link uses __is_dentry_cyclic, which explores
    all possible paths to avoid cyclic link formation during inode linkage.
    __is_dentry_cyclic explores stale dentry(ies) and all their ancestors,
    which increases traversal time exponentially.

    Implementation: to achieve this, all fops on a dentry must take entry
    locks before they proceed; once they have acquired the locks, they
    perform the fop and then release the lock.

    Some documentation from email conversation:
    [1] http://www.gluster.org/pipermail/gluster-devel/2015-December/047314.html
    [2] http://www.gluster.org/pipermail/gluster-devel/2015-August/046428.html

    With this patch, the feature is optional; enable it by running:

        gluster volume set $volname features.sdfs enable

    Change-Id: I6e80ba3cabfa6facd5dda63bd482b9bf18b6b79b
    Fixes: #397
    Signed-off-by: Sakshi Bansal <sabansal@redhat.com>
    Signed-off-by: Amar Tumballi <amarts@redhat.com>
    Signed-off-by: Sunny Kumar <sunkumar@redhat.com>
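To give a feel for what the serialization amounts to, here is a minimal
standalone sketch; this is not the sdfs xlator code, and the names and the
lock-striping scheme are illustrative only. The idea is simply that entry
fops on the same (parent, name) pair run one after another:

    #include <pthread.h>
    #include <string.h>

    #define NLOCKS 64

    /* a fixed pool of locks; each dentry hashes onto one of them */
    static pthread_mutex_t dentry_locks[NLOCKS];

    void
    dentry_locks_init(void)
    {
        for (int i = 0; i < NLOCKS; i++)
            pthread_mutex_init(&dentry_locks[i], NULL);
    }

    static unsigned int
    dentry_hash(const char *parent_gfid, const char *name)
    {
        unsigned int h = 5381;
        const char *s;

        for (s = parent_gfid; *s; s++)
            h = h * 33 + (unsigned char)*s;
        for (s = name; *s; s++)
            h = h * 33 + (unsigned char)*s;

        return h % NLOCKS;
    }

    /* every entry fop (lookup, mkdir, create, rename, unlink, ...) would
     * bracket its work with these two calls, so racing fops on the same
     * dentry are serialized */
    void
    dentry_lock(const char *parent_gfid, const char *name)
    {
        pthread_mutex_lock(&dentry_locks[dentry_hash(parent_gfid, name)]);
    }

    void
    dentry_unlock(const char *parent_gfid, const char *name)
    {
        pthread_mutex_unlock(&dentry_locks[dentry_hash(parent_gfid, name)]);
    }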
commit 24bf7715140586675f8d2036f4d589bc255c16dc
Author: Poornima G <pgurusid@redhat.com>
Date:   Tue Jan 9 17:26:44 2018 +0530

    md-cache: Implement dynamic configuration of xattr list for caching

    Currently, the list of xattrs that md-cache can cache is hard-coded
    in the md-cache.c file; this necessitates a code change and rebuild
    every time a new xattr needs to be added to the md-cache xattr cache
    list.

    With this patch, the user will be able to configure a comma-separated
    list of xattrs to be cached by md-cache.

    Updates #297

    Change-Id: Ie35ed607d17182d53f6bb6e6c6563ac52bc3132e
    Signed-off-by: Poornima G <pgurusid@redhat.com>

commit efc30e60e233164bd4fe7fc903a7c5f718b0448b
Author: Poornima G <pgurusid@redhat.com>
Date:   Tue Jan 9 10:32:16 2018 +0530

    upcall: Allow md-cache to specify invalidations on xattr with wildcard

    Currently, md-cache sends a list of xattrs it is interested in receiving
    invalidations for, but it cannot specify any wildcard in the xattr names.
    E.g.: user.* - invalidate on updating any xattr with the user. prefix.

    This patch enables upcall to honor wildcards in the xattr key names.

    Updates: #297

    Change-Id: I98caf0ed72f11ef10770bf2067d4428880e0a03a
    Signed-off-by: Poornima G <pgurusid@redhat.com>

commit 8fc9c6a8fc7c73b2b4c65a8ddbe988bca10e89b6
Author: Poornima G <pgurusid@redhat.com>
Date:   Thu Jan 4 19:38:05 2018 +0530

    posix: In getxattr, honor the wildcard '*'

    Currently, posix_xattr_fill performs a sys_getxattr on all the keys
    requested. There are requirements where the keys could contain a
    wildcard, in which case sys_getxattr would return ENODATA. E.g.: if
    the xattr requested is user.*, all the xattrs with the prefix user.
    should be returned, with their values.

    This patch changes posix_xattr_fill to honor wildcards in the keys
    requested.

    Updates #297

    Change-Id: I3d52da2957ac386fca3c156e26ff4cdf0b2c79a9
    Signed-off-by: Poornima G <pgurusid@redhat.com>
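As an illustration of the wildcard handling (this is not the actual
posix_xattr_fill() change; the function and variable names below are
hypothetical), matching xattr keys against a pattern like user.* can be
sketched as:

    #include <stdio.h>
    #include <string.h>
    #include <fnmatch.h>
    #include <sys/types.h>
    #include <sys/xattr.h>

    /* list all xattrs on a file and print the ones whose key matches the
     * requested pattern, e.g. "user.*" */
    int
    print_matching_xattrs(const char *path, const char *pattern)
    {
        char keys[4096];
        char value[4096];
        ssize_t len, vlen;
        char *key;

        len = llistxattr(path, keys, sizeof(keys));
        if (len < 0)
            return -1;

        /* keys holds NUL-terminated names packed back to back */
        for (key = keys; key < keys + len; key += strlen(key) + 1) {
            if (fnmatch(pattern, key, 0) != 0)
                continue;               /* key does not match the wildcard */

            vlen = lgetxattr(path, key, value, sizeof(value) - 1);
            if (vlen < 0)
                continue;

            value[vlen] = '\0';         /* assumes a text value, for brevity */
            printf("%s=%s\n", key, value);
        }

        return 0;
    }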
commit 84c5c540b26c8f3dcb9845344dd48df063e57845
Author: karthik-us <ksubrahm@redhat.com>
Date:   Wed Jan 17 17:30:06 2018 +0530

    cluster/afr: Adding option to take full file lock

    Problem:
    In replica 3 volumes there is a possibility of ending up in a
    split-brain scenario when multiple clients write data to the same file
    at non-overlapping regions in parallel.

    Scenario:
    - Initially all the copies are good and all the clients get the value
      of data readables as all good.
    - Client C0 performs write W1 which fails on brick B0 and succeeds on
      the other two bricks.
    - C1 performs write W2 which fails on B1 and succeeds on the other two
      bricks.
    - C2 performs write W3 which fails on B2 and succeeds on the other two
      bricks.
    - All 3 writes above happen in parallel and fall on different ranges,
      so afr takes granular locks and all the writes are performed in
      parallel. Since each client had data-readables as good, it does not
      see the file going into split-brain in the in_flight_split_brain
      check, and hence performs the post-op marking the pending xattrs.
      Now all the bricks are being blamed by each other, ending up in
      split-brain.

    Fix:
    Add an option to take either a full lock or a range lock on files
    while doing data transactions, to prevent the possibility of ending up
    in split-brain. With this change, by default the files will take a
    full lock while doing IO. If you want to make use of the old
    range-lock behaviour, change the value of "cluster.full-lock" to "no".

    Change-Id: I7893fa33005328ed63daa2f7c35eeed7c5218962
    BUG: 1535438
    Signed-off-by: karthik-us <ksubrahm@redhat.com>
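For readers unfamiliar with the full-lock vs. range-lock distinction: AFR
takes its own network inodelk, not POSIX record locks, but the difference is
the same one fcntl() expresses, where a length of 0 means "to end of file".
A tiny illustrative sketch:

    #include <sys/types.h>
    #include <fcntl.h>
    #include <unistd.h>

    static int
    lock_region(int fd, off_t start, off_t len)
    {
        struct flock fl = {
            .l_type   = F_WRLCK,
            .l_whence = SEEK_SET,
            .l_start  = start,
            .l_len    = len,      /* len == 0 locks the whole file */
        };

        return fcntl(fd, F_SETLKW, &fl);
    }

    /*
     * cluster.full-lock = yes  ->  lock_region(fd, 0, 0);        whole file
     * cluster.full-lock = no   ->  lock_region(fd, offset, len); only the
     *                              written range
     */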
commit 2db7872d5251d98d47c262ff269776bfae2d4fb9
Author: Poornima G <pgurusid@redhat.com>
Date:   Mon Aug 7 11:24:46 2017 +0530

    md-cache: Serve nameless lookup from cache

    Updates #232
    Change-Id: I97e92312a53a50c2d1660bf8d657201fc05a76eb
    Signed-off-by: Poornima G <pgurusid@redhat.com>

commit 78d67da17356b48cf1d5a6595764650d5b200ba7
Author: Sunil Kumar Acharya <sheggodu@redhat.com>
Date:   Thu Mar 23 12:50:41 2017 +0530

    cluster/ec: OpenFD heal implementation for EC

    Existing EC code doesn't try to heal the open FD to avoid unnecessary
    healing of the data later.

    The fix implements healing of open FDs before carrying out file
    operations on them, by making an attempt to open the FDs on the
    required up nodes.

    BUG: 1431955
    Change-Id: Ib696f59c41ffd8d5678a484b23a00bb02764ed15
    Signed-off-by: Sunil Kumar Acharya <sheggodu@redhat.com>

commit 14dbd5da1cae64e6d4d2c69966e19844d090ce98
Author: Niklas Hambüchen <mail@nh2.me>
Date:   Fri Dec 29 15:49:13 2017 +0100

    glusterfind: Speed up gfid lookup 100x by using an SQL index

    Fixes #1529883.

    This fixes some bits of `glusterfind`'s horrible performance,
    making it 100x faster.

    Until now, glusterfind was, for each line in each CHANGELOG.* file,
    linearly reading the entire contents of the sqlite database in
    4096-byte pread64() syscalls when executing the

        SELECT COUNT(1) FROM %s WHERE 1=1 AND gfid = ?

    query through the code path:

        get_changes()
        parse_changelog_to_db()
        when_data_meta()
        gfidpath_exists()
        _exists()

    In a quick benchmark on my laptop, doing one such `SELECT` query
    took ~75ms on a 10MB-sized sqlite DB, while doing the same query
    with an index took < 1ms.

    Change-Id: I8e7fe60f1f45a06c102f56b54d2ead9e0377794e
    BUG: 1529883
    Signed-off-by: Niklas Hambüchen <mail@nh2.me>
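The fix itself lives in glusterfind's Python code; purely for illustration,
the kind of index that turns the SELECT above from a full-table scan into an
index lookup could be created like this (sqlite3 C API, and the table name
"gfidpath" is a hypothetical stand-in for the %s in the query):

    #include <stdio.h>
    #include <sqlite3.h>

    int
    add_gfid_index(const char *dbpath)
    {
        sqlite3 *db = NULL;
        char *err = NULL;
        int rc;

        rc = sqlite3_open(dbpath, &db);
        if (rc != SQLITE_OK)
            return rc;

        /* an index on the gfid column makes the per-changelog-entry
         * "SELECT COUNT(1) ... WHERE gfid = ?" an index lookup */
        rc = sqlite3_exec(db,
                          "CREATE INDEX IF NOT EXISTS gfid_idx "
                          "ON gfidpath (gfid);",
                          NULL, NULL, &err);
        if (rc != SQLITE_OK) {
            fprintf(stderr, "index creation failed: %s\n", err);
            sqlite3_free(err);
        }

        sqlite3_close(db);
        return rc;
    }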
commit c96a1338fe8139d07a0aa1bc40f0843d033f0324
Author: Pranith Kumar K <pkarampu@redhat.com>
Date:   Wed Dec 6 07:59:53 2017 +0530

    cluster/ec: Change [f]getxattr to parallel-dispatch-one

    At the moment in EC, [f]getxattr operations wait to acquire a lock
    while other operations are in progress, even when it is on the same
    mount with a lock on the file/directory. This happens because
    [f]getxattr operations follow the model where the operation is wound
    on 'k' of the bricks and the replies are matched to make sure the data
    returned is the same on all of them. This consistency check requires
    that no other operations are ongoing while [f]getxattr operations are
    wound to the bricks.

    We can perform [f]getxattr in another way as well: find the good_mask
    from the lock that is already granted, wind the operation on any one
    of the good bricks, and unwind the answer after adjusting size/blocks
    to the parent xlator. Since we are taking good_mask into account, the
    reply we get will be either before or after a possible ongoing
    operation. Using this method, the operation doesn't need to depend on
    the completion of ongoing operations, which could take a long time
    (in the case of slow disks while writes are in progress, etc.). Thus
    we reduce the time to serve [f]getxattr requests.

    I changed [f]getxattr to dispatch-one and added extra logic in
    ec_link_has_lock_conflict() to not have any conflicts for fops with
    EC_MINIMUM_ONE as fop->minimum, to achieve the effect described above.
    Modified scripts to make sure the READ fop is received in EC to
    trigger heals.

    Updates gluster/glusterfs#368
    Change-Id: I3b4ebf89181c336b7b8d5471b0454f016cdaf296
    Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>

commit e255385ae4f4c8a883b3fb96baceba4b143828da
Author: Csaba Henk <csaba@redhat.com>
Date:   Fri Nov 10 20:33:20 2017 +0100

    write-behind: Allow trickling-writes to be configurable

    This is the undisputed/trivial part of Shreyas' patch attached to
    https://bugzilla.redhat.com/1364740 (of which the current bug is a
    clone).

    We need more evaluation for the page_size and window_size bits before
    taking them on.

    Change-Id: Iaa0b9a69d35e522b77a52a09acef47460e8ae3e9
    BUG: 1428060
    Co-authored-by: Shreyas Siravara <sshreyas@fb.com>
    Signed-off-by: Csaba Henk <csaba@redhat.com>

commit c26cadd31dfa128c4ec6883f69d654813f351018
Author: Poornima G <pgurusid@redhat.com>
Date:   Fri Jun 30 12:52:21 2017 +0530

    quick-read: Integrate quick read with upcall and increase cache time

    Fixes: #261
    Co-author: Subha sree Mohankumar <smohanku@redhat.com>
    Change-Id: Ie9dd94e86459123663b9b200d92940625ef68eab
    Signed-off-by: Poornima G <pgurusid@redhat.com>

commit d95db5505a9cb923e61ccd23d28b45ceb07b716f
Author: Shreyas Siravara <sshreyas@fb.com>
Date:   Thu Sep 7 15:34:58 2017 -0700

    md-cache: Cache statfs calls

    Summary:
    - This allows md-cache to cache statfs calls.
    - You can turn it on or off via
      'gluster vol set groot performance.md-cache-statfs <on|off>'

    Change-Id: I664579e3c19fb9a6cd9d7b3a0eae061f70f4def4
    BUG: 1523295
    Signature: t1:4652632:1488581841:111cc01efe83c71f1e98d075abb10589c4574705
    Reviewed-on: https://review.gluster.org/18228
    Reviewed-by: Shreyas Siravara <sshreyas@fb.com>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    Signed-off-by: Shreyas Siravara <sshreyas@fb.com>
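This is not the md-cache implementation, but a minimal sketch of what caching
statfs amounts to: remember the last result and serve it for a short window
instead of winding the call down every time. The names and the timeout
handling below are illustrative only:

    #include <time.h>
    #include <sys/statvfs.h>

    struct statfs_cache {
        struct statvfs buf;
        time_t fetched_at;
        int valid;
    };

    int
    cached_statvfs(const char *path, struct statfs_cache *cache,
                   struct statvfs *out, int timeout_sec)
    {
        time_t now = time(NULL);

        if (cache->valid && now - cache->fetched_at < timeout_sec) {
            *out = cache->buf;          /* serve from the cache */
            return 0;
        }

        if (statvfs(path, &cache->buf) != 0)
            return -1;                  /* real call failed */

        cache->fetched_at = now;        /* refresh the cache */
        cache->valid = 1;
        *out = cache->buf;
        return 0;
    }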
commit 430484c92ab5a6234958d1143e0bb14aeb0cd1c0
Author: Mohit Agrawal <moagrawa@redhat.com>
Date:   Fri Oct 20 12:39:29 2017 +0530

    glusterfs: Use gcc builtin ATOMIC operator to increase/decrease refcount.

    Problem: In the glusterfs code base we call mutex_lock/unlock to take
    a reference/dereference on an object. Sometimes this can also be a
    source of lock contention.

    Solution: There is no need to use a mutex to increase/decrease the ref
    counter; instead of using a mutex, use the gcc builtin ATOMIC
    operations.

    Test: I have not yet measured how much performance glusterfs itself
    gains from this patch, but I tested the same with a small program
    (both mutex and atomic) and saw a good difference.

    Change-Id: Ie5030a52ea264875e002e108dd4b207b15ab7cc7
    Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
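The before/after pattern the commit describes, sketched standalone (no
glusterfs internals; the structure and function names are illustrative):

    #include <pthread.h>

    /* before: every ref/unref takes a mutex */
    struct obj_mutex {
        pthread_mutex_t lock;
        int refcount;
    };

    static void
    ref_mutex(struct obj_mutex *o)
    {
        pthread_mutex_lock(&o->lock);
        o->refcount++;
        pthread_mutex_unlock(&o->lock);
    }

    /* after: a single lock-free atomic operation per ref/unref */
    struct obj_atomic {
        int refcount;
    };

    static void
    ref_atomic(struct obj_atomic *o)
    {
        __atomic_add_fetch(&o->refcount, 1, __ATOMIC_SEQ_CST);
    }

    static int
    unref_atomic(struct obj_atomic *o)
    {
        /* returns the new count so the caller can free the object at zero */
        return __atomic_sub_fetch(&o->refcount, 1, __ATOMIC_SEQ_CST);
    }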
commit f9b6174a7f5eb6475ca9780b062bfb3ff1132b2d
Author: Shreyas Siravara <sshreyas@fb.com>
Date:   Mon Apr 10 12:36:21 2017 -0700

    posix: Add option to disable nftw() based deletes when purging the landfill directory

    Summary:
    - We may have found an issue where certain directories were being
      moved into .landfill and then being quickly purged via nftw().
    - We would like to have an emergency option to disable these purges.

    > Reviewed-on: https://review.gluster.org/18253
    > Reviewed-by: Shreyas Siravara <sshreyas@fb.com>

    Fixes #371

    Change-Id: I90b54c535930c1ca2925a928728199b6b80eadd9
    Signed-off-by: Amar Tumballi <amarts@redhat.com>

commit 59d1cc720f52357f7a6f20bb630febc6a622c99c
Author: Raghavendra G <rgowdapp@redhat.com>
Date:   Tue Sep 19 09:44:55 2017 +0530

    cluster/dht: populate inode in dentry for single subvolume dht

    ... in the readdirp response if the dentry points to a directory
    inode. This is a special case where the entire layout is stored in one
    single subvolume, and hence there is no need for a lookup to construct
    the layout.

    Change-Id: I44fd951e2393ec9dac2af120469be47081a32185
    BUG: 1492625
    Signed-off-by: Raghavendra G <rgowdapp@redhat.com>

commit e785faead91f74dce7c832848f2e8f3f43bd0be5
Author: Raghavendra G <rgowdapp@redhat.com>
Date:   Mon Sep 18 16:01:34 2017 +0530

    cluster/dht: don't overfill the buffer in readdir(p)

    Superfluous dentries that cannot fit in the buffer size provided by
    the kernel are thrown away by fuse-bridge. This means:

    * the next readdir(p) seen by readdir-ahead would have an offset of a
      dentry returned in a previous readdir(p) response. When
      readdir-ahead detects a non-monotonic offset it turns itself off,
      which can result in poor readdir performance.

    * readdirp can be cpu-intensive on the brick, and there is no point in
      reading all those dentries just for them to be thrown away by
      fuse-bridge.

    So, the best strategy would be to fill the buffer optimally - neither
    overfill nor underfill it.

    Change-Id: Idb3d85dd4c08fdc4526b2df801d49e69e439ba84
    BUG: 1492625
    Signed-off-by: Raghavendra G <rgowdapp@redhat.com>
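A sketch of the "fill the buffer optimally" strategy (not the dht/fuse-bridge
code; the types and names are illustrative): given the byte budget passed in
by the kernel, keep adding dentries only while the next one still fits, so
nothing is thrown away and the next readdir offset stays monotonic:

    #include <stddef.h>

    struct dentry {
        const char *name;
        size_t reply_size;   /* bytes this entry occupies in the reply */
    };

    /* Returns how many of the 'count' candidate entries fit within
     * 'budget' bytes; a real implementation would stop requesting entries
     * from the brick at this point instead of discarding the excess. */
    size_t
    fill_readdir_reply(const struct dentry *entries, size_t count,
                       size_t budget)
    {
        size_t used = 0;
        size_t i;

        for (i = 0; i < count; i++) {
            if (used + entries[i].reply_size > budget)
                break;                  /* would overfill: stop here */
            used += entries[i].reply_size;
        }

        return i;
    }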
commit 4ad64ffe8664cc0b964586af6efcf53cc619b68a
Author: Pranith Kumar K <pkarampu@redhat.com>
Date:   Fri Nov 17 07:20:21 2017 +0530

    ec: Use tiebreaker_inodelk where necessary

    When there are big directories or files that need to be healed, other
    shds are stuck waiting for the lock on the self-heal domain for these
    directories/files. With a tie-breaker logic, the other shds can heal
    some other files/directories while one of the shds is healing the big
    file/directory.

    Before this patch:
        96.67    4890.64 us    12.89 us    646115887.30 us    340869    INODELK
    After this patch:
        40.76      42.35 us    15.09 us         6546.50 us    438478    INODELK

    Fixes gluster/glusterfs#354
    Change-Id: Ia995b5576b44f770c064090705c78459e543cc64
    Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>

commit 3f8d118e48f11f448f35aca0c48ad40e0fd34f5b
Author: Xavier Hernandez <jahernan@redhat.com>
Date:   Tue Nov 7 13:45:03 2017 +0100

    libglusterfs/atomic: Improved atomic support

    This patch solves a detection problem in configure.ac that prevented
    compilation from detecting the builtin __atomic or __sync functions.

    It also adds more atomic types and support for other atomic functions.

    A special case has been added to support 64-bit atomics on 32-bit
    systems. The solution is to fall back to the mutex solution only for
    64-bit atomics; smaller atomic types will still take advantage of the
    builtins if available.

    Change-Id: I6b9afc7cd6e66b28a33278715583552872278801
    BUG: 1510397
    Signed-off-by: Xavier Hernandez <jahernan@redhat.com>

commit 0dcd5b2feeeec7c29bd2454d6ad950d094d02b0f
Author: Xavier Hernandez <jahernan@redhat.com>
Date:   Mon Oct 16 13:57:59 2017 +0200

    cluster/ec: create eager-lock option for non-regular files

    A new option is added to allow independent configuration of eager
    locking for regular files and non-regular files.

    Change-Id: I8f80e46d36d8551011132b15c0fac549b7fb1c60
    BUG: 1502610
    Signed-off-by: Xavier Hernandez <jahernan@redhat.com>

Apart from these commits, there are also some patches that aid concurrency in
the code. I've left them out since their performance benefits have not been
measured and they don't affect users directly. If you feel these have to be
added, please let me know. Some of these changes are:
* Patches from Zhang Huan <zhanghuan@open-fs.com> aimed at reducing lock
  contention in the rpc layer and while accessing the fdtable.
* Patches from Milind Changire <mchangir@redhat.com> reducing lock contention
  while accessing programs in rpcsvc.

From the commits listed above, I see that the following components are
affected, and I've listed the owners responsible for a short summary of the
changes along with each component:
* glusterd: optimize glusterd import volumes code path - Atin
* md-cache - Shreyas and Poornima
* EC - Xavi and Pranith (I see that Pranith already sent an update, so I guess
  this is covered)
* Improvements to consumption of atomic builtins - Xavi and Mohit
* Improvements to glusterfind - Niklas Hambüchen, Milind and Aravinda V K
* Modification of quick-read to consume upcall notifications - Poornima
* Exposing trickling-writes in write-behind - Csaba and Shreyas
* Changes to purging the landfill directory in storage/posix - Shreyas
* Adding an option to take full file locks in afr - Karthick Subramanya
* readdirplus enhancements in DHT - Raghavendra Gowdappa
* Dentry Fop Serializer - Raghavendra Gowdappa and Amar

Please send out patches updating the "Performance" section of the release
notes. If you think your patch need not be mentioned in the release notes,
please send an explicit nack so that we'll know.

If I've left out any fixes, please point them out. If not, only a subset of
the changes listed above will get a mention in the "Performance" section of
the release notes.
On Tue, Feb 20, 2018 at 7:59 AM, Raghavendra Gowdappa <rgowdapp@redhat.com> wrote:

+gluster-devel.

On Tue, Feb 20, 2018 at 7:35 AM, Raghavendra Gowdappa <rgowdapp@redhat.com> wrote:

All,

I am trying to come up with content for the 4.0 release notes summarizing the
performance impact. Can you point me to patches/documentation/issues/bugs
that could impact performance in 4.0? Better still, if you can give me a
summary of the changes having a performance impact, it would really be
helpful.

I see that Pranith had responded with this link:
https://review.gluster.org/#/c/19535/3/doc/release-notes/4.0.0.md

regards,
Raghavendra
_______________________________________________
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

--
Amye Scavarda | amye@redhat.com | Gluster Community Lead