[Gluster-users] Update from GlusterFS project (November -2018)

Amar Tumballi atumball at redhat.com
Tue Dec 11 04:05:26 UTC 2018


- *Key Highlights:*


The overall focus was on stabilizing the product, and most of the patches
that landed were in that direction.


- More ‘smoke’ jobs have been added, to make sure we have tools to
  identify common issues.

- ASan-focused memory-leak hunting is underway, so we expect a much
  cleaner stack in the next version.

- Client-side inode LRU limit work is underway, so that glusterfs clients
  can be tuned to work within certain memory limits.

- Further stabilization of the brick-mux use case, where volumes are
  created / deleted frequently (the container use case).

- Ctime is now enabled by default, which means some applications that
  used to fail on highly available setups now work fine.

- Work is underway to reduce the number of threads in the brick process,
  especially when there are many volumes in the system and
  brick-multiplexing is enabled.



- *Bugzilla Update / Github issues:*


At the beginning of November, we had close to 800 bugs which were not
closed (i.e., in NEW/ASSIGNED/POST/MODIFIED states). By the end of
November, that list had shrunk to ~610.

For the current bug count, check the report @ https://red.ht/2Url7gj

Note that this is also partly due to the bulk movement of RFE bugs to
github issues.

Other than that, we have decided to mark a bug as CLOSED - NEXTRELEASE
as soon as the patch carrying ‘fixes: bz#NNNN’ gets merged; the script
which bulk-closes bugs after a release then moves them to CLOSED -
CURRENTRELEASE, to make sure the bugzilla status is properly maintained.
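For reference, this workflow keys off the commit-message trailer: a patch that should close a Bugzilla entry carries a ‘fixes:’ line referencing the bug number. A schematic commit message (subject and body invented for illustration, placeholder bug number kept as in the text above) would look like:

```
glusterd: fix cleanup sequence on volume delete

<description of the change>

fixes: bz#NNNN
```

When a patch with such a trailer is merged, the referenced bug moves to CLOSED - NEXTRELEASE; the post-release script later flips it to CLOSED - CURRENTRELEASE.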

As for Github issues, triaging has been slow, and with the bulk RFE
migration from bugzilla we now have more than 330 open issues.


- *Focus areas for December / January*


For most of December, the plan is to work on the proposed list of
activities for the GlusterFS 6.0 release.

The github issues we are working on can be found at
https://github.com/gluster/glusterfs/milestone/8

For those who may not click on the link, some key highlights are:


- More focus on the cleanup sequence, and on memory corruption/leak
  issues in that area.

- Scalability in the number of volumes.

- Client-side inode LRU limit, which should allow us to control client
  memory usage.

- Reflink and copy-file-range support.

- Fencing support for virtual block files hosted on glusterfs.

- Development API cleanup and re-org:

  - This should allow us to host translators outside of the glusterfs
    repository.

  - It can allow different vendors and developers to have their own
    translators, which they can add to the graph by placing them in the
    relevant path.

- Performance enhancements:

  - Slow ‘ls’ (directory listing speed).

  - Evaluation of locked regions, to see what we can improve.

  - A plan for nightly performance runs of upstream bits, so we don’t
    regress a lot from some patches.
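On the copy-file-range item: the underlying Linux primitive such support would build on is the copy_file_range(2) syscall (glibc >= 2.27), which copies a byte range between two file descriptors without routing the data through userspace buffers. The sketch below is a minimal standalone illustration of that syscall, not GlusterFS code; the wrapper name `copy_range` is invented for the example.

```c
/* cfr_demo.c - minimal sketch of the Linux copy_file_range(2) primitive.
 * Illustrative only; shows the userspace API that copy-file-range
 * support in a filesystem ultimately maps to. Requires Linux >= 4.5
 * and glibc >= 2.27. */
#define _GNU_SOURCE
#include <assert.h>
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* Copy 'len' bytes from src_fd to dst_fd, starting at offset 0 in both.
 * copy_file_range() updates the offset variables we pass in, so we loop
 * until the requested range is fully copied. Returns 0 on success. */
int copy_range(int src_fd, int dst_fd, size_t len)
{
    off_t in_off = 0, out_off = 0;

    while (len > 0) {
        ssize_t n = copy_file_range(src_fd, &in_off, dst_fd, &out_off,
                                    len, 0);
        if (n <= 0)
            return -1; /* error, or unexpected end of source file */
        len -= (size_t)n;
    }
    return 0;
}
```

The kernel can satisfy such a call with filesystem-level acceleration (e.g. reflinks on filesystems that support them), which is why exposing it through a network filesystem is attractive: the copy can happen server-side instead of shuttling data through the client.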


Regards,
Amar, Shyam, Xavi.