[Gluster-Maintainers] Gluster-devel Digest, Vol 31, Issue 61

Mohit Agrawal moagrawa at redhat.com
Thu Oct 27 10:13:41 UTC 2016


Hi,

I have done some basic testing specific to the SSL component on the tarball
(http://bits.gluster.org/pub/gluster/glusterfs/src/glusterfs-3.9.0rc2.tar.gz):

1) After enabling SSL (I/O and management encryption), mounting works (for
distributed/replicated volumes) and I am able to transfer data on the volume.
2) Reconnection works after a disconnect.
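
For anyone who wants to repeat this, the sequence I used looks roughly like
the sketch below (the volume name, server name and allowed CN list are
illustrative, and the TLS certificate, key and CA files must already be in
place on every node and client):

    # TLS identity GlusterFS looks for on each node and client:
    #   /etc/ssl/glusterfs.pem  (certificate)
    #   /etc/ssl/glusterfs.key  (private key)
    #   /etc/ssl/glusterfs.ca   (CA bundle / concatenated peer certs)

    # Management-plane encryption: create this file on all nodes and
    # clients before (re)starting glusterd.
    touch /var/lib/glusterd/secure-access

    # I/O-plane encryption on the volume:
    gluster volume set testvol client.ssl on
    gluster volume set testvol server.ssl on

    # Optionally restrict which certificate CNs may connect:
    gluster volume set testvol auth.ssl-allow 'client1,client2'

    # Mount and verify data transfer and reconnection:
    mount -t glusterfs server1:/testvol /mnt/testvol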

Regards
Mohit Agrawal




On Thu, Oct 27, 2016 at 8:30 AM, <gluster-devel-request at gluster.org> wrote:

> Send Gluster-devel mailing list submissions to
>         gluster-devel at gluster.org
>
> To subscribe or unsubscribe via the World Wide Web, visit
>         http://www.gluster.org/mailman/listinfo/gluster-devel
> or, via email, send a message with subject or body 'help' to
>         gluster-devel-request at gluster.org
>
> You can reach the person managing the list at
>         gluster-devel-owner at gluster.org
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of Gluster-devel digest..."
>
>
> Today's Topics:
>
>    1. Memory management and friends (Oleksandr Natalenko)
>    2. Re: Gluster Test Thursday - Release 3.9 (Aravinda)
>    3. Re: Multiplexing status, October 26 (Jeff Darcy)
>    4. Re: [Gluster-Maintainers] Gluster Test Thursday - Release 3.9
>       (Niels de Vos)
>    5. Re: [Gluster-Maintainers] glusterfs-3.9.0rc2 released
>       (Kaleb S. KEITHLEY)
>    6. automating straightforward backports (Pranith Kumar Karampuri)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Wed, 26 Oct 2016 16:27:40 +0200
> From: Oleksandr Natalenko <oleksandr at natalenko.name>
> To: Gluster Devel <gluster-devel at gluster.org>
> Subject: [Gluster-devel] Memory management and friends
> Message-ID: <1c49cb1391aaeda5b3fb129ccbf66c83 at natalenko.name>
> Content-Type: text/plain; charset=UTF-8; format=flowed
>
> Hello.
>
> As a result of today's community meeting, I'm starting a dedicated ML
> thread to gather memory management issues in one place, so that we can
> summarize them and construct a plan for what to do next.
>
> Very important notice: I'm not an active GlusterFS developer, but I
> gained extensive experience with GlusterFS at my previous job, and the
> main issue that chased me the whole time was memory leaking. Consider
> this a request for action from a GlusterFS customer, apparently approved
> by Kaushal and Amye during the last meeting :).
>
> So, here are the key points.
>
> 1) Almost all the nasty and obvious memory leaks have been successfully
> fixed during the last year, and that allowed me to run GlusterFS in
> production at my previous job for almost all types of workload except
> one: dovecot mail storage. The peculiarity of this workload is that it
> involves a huge number of files, and I assume it is a kind of edge case
> that unhides some dark corners of GlusterFS memory management. I was
> able to provide Nithya with Valgrind+Massif memory profiling results and
> a test case (the rough shape of such a run is sketched after the links
> below), and that helped her to prepare at least one extra fix (and more
> to come, AFAIK) that deals with readdirp-related code. Nevertheless, it
> is reported that this is not the major source of leaking. Nithya
> suspects that memory gets fragmented heavily due to lots of small
> allocations, and that the memory pools cannot cope with this kind of
> fragmentation under constant load.
>
> Related BZs:
>
>    * https://bugzilla.redhat.com/show_bug.cgi?id=1369364
>    * https://bugzilla.redhat.com/show_bug.cgi?id=1380249
>
> People involved:
>
>    * nbalacha, could you please provide more info on your findings?
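>
> For anyone who wants to reproduce this kind of profile, the rough shape
> of the Massif run is sketched below (server, volume and mount point are
> illustrative):
>
>    valgrind --tool=massif --massif-out-file=/tmp/glfs-massif.out \
>        /usr/sbin/glusterfs -N --volfile-server=server1 \
>        --volfile-id=myvol /mnt/myvol
>    # ...run the file-heavy workload, unmount, then inspect:
>    ms_print /tmp/glfs-massif.out | less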
>
> 2) Meanwhile, Jeff is pressing on with the brick multiplexing feature,
> facing memory management issues too and blaming memory pools for them.
>
> Related ML email:
>
>    *
> http://www.gluster.org/pipermail/gluster-devel/2016-October/051118.html
>    *
> http://www.gluster.org/pipermail/gluster-devel/2016-October/051160.html
>
> People involved:
>
>    * jdarcy, have you discussed this outside of ML? It seems your email
> didn't get proper attention.
>
> 3) We had a brief discussion with obnox and anoopcs on #gluster-meeting
> and #gluster-dev regarding jemalloc and talloc. obnox believes that we
> could use both: jemalloc to substitute for malloc/free, and talloc to
> rewrite GlusterFS memory management properly. A quick way to try the
> former is sketched below.
>
> Related logs:
>
>    *
> https://botbot.me/freenode/gluster-dev/2016-10-26/?msg=75501394&page=2
>
> People involved:
>
>    * obnox, could you share your ideas on this?
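>
> As a first experiment, jemalloc can be tried without any code changes
> via LD_PRELOAD (a sketch; the library path and soname vary by
> distribution, and server/volume names are illustrative):
>
>    LD_PRELOAD=/usr/lib64/libjemalloc.so.1 /usr/sbin/glusterfs \
>        --volfile-server=server1 --volfile-id=myvol /mnt/myvol
>    # jemalloc can also print allocator stats on exit:
>    # MALLOC_CONF=stats_print:true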
>
> To summarize:
>
> 1) we need the key devs involved in memory management to share their
> ideas;
> 2) using a production-proven memory allocator and memory pool
> implementation is desirable;
> 3) someone should manage the workflow of reconstructing memory
> management.
>
> Feel free to add anything I've missed.
>
> Regards,
>    Oleksandr
>
>
> ------------------------------
>
> Message: 2
> Date: Wed, 26 Oct 2016 20:04:54 +0530
> From: Aravinda <avishwan at redhat.com>
> To: Gluster Devel <gluster-devel at gluster.org>,  GlusterFS Maintainers
>         <maintainers at gluster.org>
> Subject: Re: [Gluster-devel] Gluster Test Thursday - Release 3.9
> Message-ID: <23162ff6-3f9e-957b-fb8c-7ad9924e1434 at redhat.com>
> Content-Type: text/plain; charset=utf-8; format=flowed
>
> Gluster 3.9.0rc2 tarball is available here:
> http://bits.gluster.org/pub/gluster/glusterfs/src/glusterfs-3.9.0rc2.tar.gz
>
> regards
> Aravinda
>
> On Tuesday 25 October 2016 04:12 PM, Aravinda wrote:
> > Hi,
> >
> > Since the automated test framework for Gluster is still in progress, we
> > need help from maintainers and developers to test the features and bug
> > fixes to release Gluster 3.9.
> >
> > In the last maintainers meeting, Shyam shared an idea about having a Test
> > day to accelerate the testing and release.
> >
> > Please participate in testing your component(s) on Oct 27, 2016. We
> > will prepare the rc2 build by tomorrow and share the details before
> > Test day.
> >
> > RC1 Link:
> > http://www.gluster.org/pipermail/maintainers/2016-September/001442.html
> > Release Checklist:
> > https://public.pad.fsfe.org/p/gluster-component-release-checklist
> >
> >
> > Thanks and Regards
> > Aravinda and Pranith
> >
>
>
>
> ------------------------------
>
> Message: 3
> Date: Wed, 26 Oct 2016 10:58:25 -0400 (EDT)
> From: Jeff Darcy <jdarcy at redhat.com>
> To: Gluster Devel <gluster-devel at gluster.org>
> Subject: Re: [Gluster-devel] Multiplexing status, October 26
> Message-ID:
>         <1756619054.12954195.1477493905921.JavaMail.zimbra at redhat.com>
> Content-Type: text/plain; charset=utf-8
>
> Here are some of the numbers.  Note that these are *without* multiplexing,
> which is where these changes are really most beneficial, because I wanted
> to measure the effect of this patch on its own.  Also, perf-test.sh is a
> truly awful test.  For one thing it's entirely single-threaded, which
> limits its usefulness in general and reduces that usefulness to near (or
> below) zero for any change focused on higher levels of parallelism.  Also,
> it's very unbalanced, testing certain operations for over twenty minutes
> and others for mere fractions of a second.  Lastly, it takes way too long
> to run - over two hours on my machines.  Tying up a test machine for a
> whole day running tests before and after a change three times each (as I
> was asked to do) is not a good use of resources.  What I actually ran is a
> reduced and rebalanced version, which takes about fifteen minutes.  I also
> ran my own test that I developed for multiplexing - twenty volumes/mounts,
> eighty bricks, a hundred client I/O threads creating/writing/deleting
> files.  It drives load literally a hundred times higher, and really
> tests scalability.
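>
> (For the curious, the shape of that load test is roughly the sketch
> below; this is not the actual harness, just an illustration with made-up
> volume names and file counts, each volume having four bricks server-side
> so that 20 x 4 = 80:
>
>    for v in $(seq 1 20); do
>        mount -t glusterfs server1:/vol$v /mnt/vol$v
>    done
>    for t in $(seq 1 100); do
>        ( d=/mnt/vol$(( (t % 20) + 1 ))
>          for f in $(seq 1 1000); do
>              dd if=/dev/zero of=$d/f.$t.$f bs=64k count=4 2>/dev/null
>              rm -f $d/f.$t.$f
>          done ) &
>    done
>    wait
> )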
>
> Full results are below.  To summarize, the patch seems to improve
> performance on perf-test on 9/17 tests.  Overall it's either 0.6% slower or
> 2.3% faster depending on how you combine the per-test results.  On my own
> loadtest it's 1.9% slower.  In other words, for the *non-multiplexing* case
> (which will soon be obsolete), we're either below the measurement-error
> threshold or truly slower by a tiny amount.  With multiplexing and other
> multiplexing-related changes (e.g. http://review.gluster.org/#/c/15645/)
> we'll be way ahead on any realistic test.
>
> *** PERF-TEST WITHOUT PATCH
>
> Testname                Time
> emptyfiles_create       49.57
> emptyfiles_delete       22.05
> smallfiles_create       98.70
> smallfiles_rewrite      94.14
> smallfiles_read         28.01
> smallfiles_reread       10.76
> smallfiles_delete       23.25
> largefile_create        26.58
> largefile_rewrite       59.18
> largefile_read          20.43
> largefile_reread        1.35
> largefile_delete        0.59
> directory_crawl_create  126.85
> directory_crawl         8.78
> directory_recrawl       6.81
> metadata_modify         132.44
> directory_crawl_delete  53.75
>
> real    13m57.254s
> user    0m16.929s
> sys     0m44.114s
>
> *** LOADTEST WITHOUT PATCH
>
> real    5m15.483s
> user    0m0.724s
> sys     0m16.686s
>
> *** PERF-TEST WITH PATCH
>
> Testname                Time
> emptyfiles_create       48.73   -  1.7%
> emptyfiles_delete       23.41   +  6.2%
> smallfiles_create       92.06   -  6.7%
> smallfiles_rewrite      86.33   -  8.3%
> smallfiles_read         28.48   +  1.7%
> smallfiles_reread       11.56   +  7.4%
> smallfiles_delete       22.99   -  1.1%
> largefile_create        22.76   -  2.1%
> largefile_rewrite       60.94   +  3.0%
> largefile_read          18.67   -  8.6%
> largefile_reread        1.52    + 12.6%
> largefile_delete        0.55    -  6.8%
> directory_crawl_create  125.47  -  1.1%
> directory_crawl         10.19   + 16.1%
> directory_recrawl       6.58    -  3.4%
> metadata_modify         125.29  -  5.4%
> directory_crawl_delete  55.63   +  3.5%
>                                 +  0.6%
>
> real    13m38.115s -2.3%
> user    0m16.197s
> sys     0m43.057s
>
> *** LOADTEST WITH PATCH
>
> real    5m21.479s + 1.9%
> user    0m0.685s
> sys     0m17.260s
>
>
> ------------------------------
>
> Message: 4
> Date: Wed, 26 Oct 2016 20:13:15 +0200
> From: Niels de Vos <ndevos at redhat.com>
> To: "Kaleb S. KEITHLEY" <kkeithle at redhat.com>
> Cc: GlusterFS Maintainers <maintainers at gluster.org>,    Gluster Devel
>         <gluster-devel at gluster.org>
> Subject: Re: [Gluster-devel] [Gluster-Maintainers] Gluster Test
>         Thursday - Release 3.9
> Message-ID: <20161026181315.GC2936 at ndevos-x240.usersys.redhat.com>
> Content-Type: text/plain; charset="us-ascii"
>
> On Tue, Oct 25, 2016 at 01:11:26PM -0400, Kaleb S. KEITHLEY wrote:
> > On 10/25/2016 12:11 PM, Niels de Vos wrote:
> > > On Tue, Oct 25, 2016 at 07:51:47AM -0400, Kaleb S. KEITHLEY wrote:
> > >> On 10/25/2016 06:46 AM, Atin Mukherjee wrote:
> > >>>
> > >>>
> > >>> On Tue, Oct 25, 2016 at 4:12 PM, Aravinda <avishwan at redhat.com
> > >>> <mailto:avishwan at redhat.com>> wrote:
> > >>>
> > >>>     Hi,
> > >>>
> > >>>     Since Automated test framework for Gluster is in progress, we
> need
> > >>>     help from Maintainers and developers to test the features and bug
> > >>>     fixes to release Gluster 3.9.
> > >>>
> > >>>     In last maintainers meeting Shyam shared an idea about having a
> Test
> > >>>     day to accelerate the testing and release.
> > >>>
> > >>>     Please participate in testing your component(s) on Oct 27, 2016.
> We
> > >>>     will prepare the rc2 build by tomorrow and share the details
> before
> > >>       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> > >>>     Test day.
> > >>>
> > >>>     RC1 Link:
> > >>>     http://www.gluster.org/pipermail/maintainers/2016-September/001442.html
> > >>>
> > >>>
> > >>> I don't think testing RC1 would be ideal as the 3.9 head has moved
> > >>> forward with a significant number of patches. I'd recommend having
> > >>> an RC2 here.
> > >>>
> > >>
> > >> BTW, please tag RC2 as 3.9.0rc2 (versus 3.9rc2).  It makes building
> > >> packages for Fedora much easier.
> > >>
> > >> I know you were following what was done for 3.8rcX. That was a pain.
> > >> :-}
> > >
> > > Can you explain what the problem is with 3.9rc2 and 3.9.0? The huge
> > > advantage is that 3.9.0 is seen as a version update to 3.9rc2. When
> > > 3.9.0rc2 is used, 3.9.0 is *not* an update for that, and rc2 packages
> > > will stay installed until 3.9.1 is released...
> > >
> > > You can check this easily with the rpmdev-vercmp command:
> > >
> > >    $ rpmdev-vercmp 3.9.0rc2 3.9.0
> > >    3.9.0rc2 > 3.9.0
> > >    $ rpmdev-vercmp 3.9rc2 3.9.0
> > >    3.9rc2 < 3.9.0
> >
> > Those aren't really very realistic RPM NVRs IMO.
> >
> > >
> > > So, at least for RPM packaging, 3.9rc2 is recommended, and 3.9.0rc2 is
> > > problematic.
> >
> > That's not the only thing recommended.
> >
> > Last I knew, one of several things that are recommended is, e.g.,
> > 3.9.0-0.2rc2; 3.9.0-1 > 3.9.0-0.2rc2.
>
> Yes, we can add a 0. in the release field of the RPMs. That works fine,
> but it needs manual adaptation of the .spec file and is not done by the
> scripts we have that get called from 'make -C extras/LinuxRPM
> glusterrpms'. This means that RPMs built from source (what developers
> do) and nightly builds need to be treated differently.
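>
> As an illustration (the release fields here are only examples), that
> scheme does sort correctly:
>
>    $ rpmdev-vercmp 3.9.0-0.2rc2 3.9.0-1
>    3.9.0-0.2rc2 < 3.9.0-1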
>
> > The RC (and {qa,alpha,beta}) packages (that I've) built for Fedora for
> > several years have had NVRs in that form.
> >
> > This scheme was what was suggested to me on the fedora-devel mailing
> > list several years ago.
>
> Indeed, and this is common for Fedora packages. Maybe we should adopt
> that for our community RPMs too.
>
> > When RCs are tagged as 3.9rc1, I have to make non-trivial and
> > counter-intuitive changes to the .spec file to build packages with NVRs
> > like 3.9.0-0.XrcY. If they are tagged 3.9.0rc1, the changes are much
> > more straightforward and much simpler.
>
> Yes, that is, if you want to have the 3.9.0 version and do not want to
> use the 3.9rc2 version directly.
>
> We probably should address the pre-release tagging in our build scripts,
> so that the next release can easily be tagged v3.10.0rc1 or such.
>
> Thanks!
> Niels
>
> ------------------------------
>
> Message: 5
> Date: Wed, 26 Oct 2016 14:49:24 -0400
> From: "Kaleb S. KEITHLEY" <kkeithle at redhat.com>
> To: Gluster Devel <gluster-devel at gluster.org>,
>         maintainers at gluster.org,        packaging at gluster.org
> Subject: Re: [Gluster-devel] [Gluster-Maintainers] glusterfs-3.9.0rc2
>         released
> Message-ID: <390562a6-99b7-624f-cd83-b8ba49cdaddf at redhat.com>
> Content-Type: text/plain; charset=utf-8
>
> On 10/26/2016 10:29 AM, Gluster Build System wrote:
> >
> >
> > SRC: http://bits.gluster.org/pub/gluster/glusterfs/src/glusterfs-3.9.0rc2.tar.gz
> >
>
> WRT GlusterFS 3.9.0rc2 testing day tomorrow (Thursday, 27 October):
>
> There are packages for Fedora 23, 24, 25; and EPEL 6 and 7 at [1].
>
> There are also packages for Fedora 26 (rawhide) in Fedora Updates.
>
> I will try to get packages for Debian 8 (jessie) by tomorrow. They will
> be at the same location. I may also try to get Ubuntu packages; if I do,
> I will announce the location, as they may be somewhere else, e.g.
> Launchpad.
>
> [1] https://download.gluster.org/pub/gluster/glusterfs/3.9/3.9.0rc2/
>
> --
>
> Kaleb
>
>
> ------------------------------
>
> Message: 6
> Date: Thu, 27 Oct 2016 08:30:07 +0530
> From: Pranith Kumar Karampuri <pkarampu at redhat.com>
> To: Gluster Devel <gluster-devel at gluster.org>
> Subject: [Gluster-devel] automating straightforward backports
> Message-ID:
>         <CAOgeEnaen8OS4K=-fE20PL-VBcJSKBmgpUGzTFuyU7B39zJidg at mail.
> gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> hi,
>      Nowadays I am seeing quite a few patches that are straightforward
> backports from master. But if I follow the process manually, it generally
> takes around 10 minutes to port each patch. I was wondering if anyone else
> has looked into automating this. Yesterday I had to backport
> http://review.gluster.org/15728 to 3.9, 3.8 and 3.7, so I finally took
> some time to automate portions of the workflow. I want to exchange ideas
> on what you may be using to achieve the same.
>
> Here is how I automated portions of it:
> 1) Cloning the bug to different branches:
>      Not automated: it seems the bugzilla CLI doesn't allow cloning a bug
> :-(. Does anyone know if we can write a script which interacts with the
> website to achieve this?
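>
> (A crude approximation, not a true clone, might be possible with the
> python-bugzilla CLI; the sketch below is unverified, and the bug id,
> product and component are illustrative:
>
>    summary=$(bugzilla query -b 1234567 --outputformat '%{summary}')
>    bugzilla new --product GlusterFS --component core --version 3.9 \
>        --summary "$summary" \
>        --comment "Clone of bug 1234567 for release-3.9"
> )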
>
> 2) Porting the patch to the branches: I wrote the following script, which
> does the porting, adding the prefix " >" to the commit headers:
> ===================================================
> $ cat ../backport.sh
> #!/bin/bash
> # Launch it like this:
> #   BRANCHES="3.9 3.8 3.7" ./backport.sh <branch-name-prefix> <commit-hash-to-be-backported>
>
> prefix=$1
> shift
> commit=$1
> shift
>
> function add_prefix_to_commit_headers {
>         # We have the habit of adding ' >' to the commit headers
>         for i in BUG Change-Id Signed-off-by Reviewed-on Smoke \
>                  NetBSD-regression Reviewed-by CentOS-regression; do
>                 sed -i -e "s/^$i:/ >$i:/" commit-msg
>         done
> }
>
> function form_commit_msg {
>         # Get the commit message out of the commit
>         local commit=$1
>         git log --format=%B -n 1 $commit > commit-msg
> }
>
> function main {
>         cur_branch=$(git rev-parse --abbrev-ref HEAD)
>         form_commit_msg $commit
>         add_prefix_to_commit_headers
>         rm -f branches
>         # For each target branch: create a local backport branch from the
>         # release branch, cherry-pick, and amend with the prefixed message.
>         for i in $BRANCHES; do
>                 cp commit-msg ${i}-commit-msg &&
>                 git checkout -b ${prefix}-${i} origin/release-${i} > /dev/null &&
>                 git cherry-pick $commit &&
>                 git commit -s --amend -F ${i}-commit-msg &&
>                 echo ${prefix}-${i} >> branches
>         done
>         git checkout $cur_branch
> }
>
> main
> ===================================================
>
> 3) Adding reviewers, triggering regressions and smoke:
>      I have been looking around for a good gerrit CLI; at the moment I am
> happy with the one installed through npm. You need to first install npm
> on your box and then do 'npm install gerrit'.
>      Go to the branch from which we did the commit and do:
>         # gerrit assign xhernandez at datalab.es - this adds Xavi as a
> reviewer for the patch that I just committed.
>         # gerrit comment "recheck smoke"
>         # gerrit comment "recheck centos"
>         # gerrit comment "recheck netbsd"
>
> 4) I have yet to look into the bugzilla CLI to come up with the command to
> move the bugs into POST, but maybe Niels has it at his fingertips?
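>
> (If the python-bugzilla CLI is an option, a sketch, with an illustrative
> bug id, might be:
>
>    bugzilla modify 1234567 --status POST \
>        --comment "Patch posted for review on release-3.9"
> )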
>
> The main pain point has been cloning the bugs. If we had an automated way
> to clone the bug to the different branches, the script at 2) could be
> modified to add all the steps.
> If we can clone the bug and get the bz id of the clone, then we can add
> "BUG: <bz>" to the commit message and launch rfc.sh, which won't prompt for
> anything. We can auto-answer the coding-guidelines script by launching
> "yes | rfc.sh" if we really want to.
>
> PS: The script is something I hacked together for one-time use yesterday.
> It's not something I expected to send a mail about today, so it is not all
> that good looking. It just got the job done.
>
> --
> Pranith
>
> ------------------------------
>
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel at gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
>
> End of Gluster-devel Digest, Vol 31, Issue 61
> *********************************************
>