[Gluster-users] Update from GlusterFS project (November -2018)
atumball at redhat.com
Tue Dec 11 04:05:26 UTC 2018
The overall focus was to stabilize the product, and most of the patches
which landed were in this direction.
More ‘smoke’ jobs were added, to make sure we have the tooling to
identify common issues.
ASan-focused memory leak fixes are underway, so we expect a much
cleaner stack for the next version.
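Findings like these typically come from a build instrumented with AddressSanitizer. A minimal sketch of such a build (the `--enable-asan` configure switch is an assumption here; if a given branch lacks it, passing the sanitizer flags directly has the same effect):

```shell
# Build glusterfs with AddressSanitizer instrumentation from a source checkout.
./autogen.sh
./configure --enable-asan   # or: CFLAGS='-fsanitize=address -g' ./configure
make -j4

# Run a client under ASan; leak reports are printed on process exit.
ASAN_OPTIONS=detect_leaks=1 glusterfs --volfile-server=server1 \
    --volfile-id=testvol /mnt/testvol
```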
Client-side inode LRU limit work is underway, so that glusterfs clients
can be tuned to work within certain memory limits.
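A sketch of how this tuning is expected to look; the `lru-limit` mount option name is an assumption based on this in-progress work, while the server-side option shown for comparison is long-standing:

```shell
# Mount a volume with a bounded client-side inode table; 'lru-limit'
# (assumed name) caps how many cached-but-unused inodes the client keeps.
mount -t glusterfs -o lru-limit=65536 server1:/testvol /mnt/testvol

# The existing server-side analogue, for comparison:
gluster volume set testvol network.inode-lru-limit 65536
```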
More stabilization of the brick-mux use case, where volumes are created
/ deleted frequently (the container use case).
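For context, brick multiplexing is enabled cluster-wide with a single volume option, after which a rapid create/delete cycle of the kind this work stabilizes looks like (volume and brick names are placeholders):

```shell
# Let bricks of many volumes share one brick process.
gluster volume set all cluster.brick-multiplex on

# The churn pattern typical of container workloads:
gluster volume create testvol server1:/bricks/testvol force
gluster volume start testvol
gluster volume stop testvol
gluster volume delete testvol
```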
Ctime is also now enabled by default, which means some of the applications
which used to fail on highly available setups will now work fine.
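On volumes created before this change, the feature can be toggled explicitly; a sketch (volume name is a placeholder):

```shell
# Consistent ctime across replicas is what lets tar-style applications,
# which compare ctime before and after reading, stop failing with
# "file changed as we read it" on replicated setups.
gluster volume set testvol features.ctime on
```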
Work was underway to reduce the number of threads in the brick process,
especially when there are many volumes in the system and brick multiplexing
is enabled.
*Bugzilla Update / Github issues:*
At the beginning of November, we had close to 800 bugs which were not
closed (i.e., in NEW/ASSIGNED/POST/MODIFIED states). By the end of November,
that list had shrunk to ~610.
For current bug count check report @ https://red.ht/2Url7gj
Note that this is also partly due to a bulk movement of RFE bugs to GitHub.
Other than that, we have decided to mark bugs as CLOSED -
NEXTRELEASE when the patch which has ‘fixes: bz#NNNN’ gets merged; the
script which bulk-closes bugs after a release happens then moves them to
CLOSED - CURRENTRELEASE, to make sure the Bugzilla status is properly
maintained.
As for GitHub issues, triaging has been slow, and with the bulk RFE
migration from Bugzilla, we now have more than 330 open issues.
*Focus areas for December / January*
For most of December, the plan is to work on the proposed list of activities
for the GlusterFS 6.0 release.
The github issues which we are working on can be found at
For those who may not click on the link, some key highlights are:
More focus on the cleanup sequence, and on memory corruption/leak issues.
Scalability of number of volumes
Client-side inode LRU limit, which should allow us to control client
memory usage.
Reflink and copy-file-range support.
Fencing support for virtual block files hosted on glusterfs.
Development API cleanup and re-org.
This should allow us to host translators outside of glusterfs, and
can allow different vendors and developers to maintain their own
translators, which they can add to the graph.
Slow ‘ls’ (directory listing speed)
Evaluation of locked regions, to see what we can improve.
Plan for nightly performance runs of upstream bits, so we don’t
regress much from new patches.
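As a user-visible illustration of the reflink/copy-file-range highlight above: on a filesystem that supports it, coreutils `cp` can request a reflink clone, and otherwise falls back to an ordinary copy (which recent coreutils performs with `copy_file_range` where possible). A minimal sketch:

```shell
# --reflink=auto asks for a shared-extent clone where the filesystem
# supports it, and silently falls back to a plain copy elsewhere.
printf 'gluster reflink demo\n' > /tmp/src.txt
cp --reflink=auto /tmp/src.txt /tmp/dst.txt
cmp /tmp/src.txt /tmp/dst.txt && echo identical
```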
Amar, Shyam, Xavi.