[Gluster-devel] Fwd: Behaviour of glfs_fini() affecting QEMU

Deepak Shetty dpkshetty at gmail.com
Fri Apr 25 09:40:41 UTC 2014


Since nongnu.org was down, I don't see the mail below (Avati's and Soumya's
responses) for this mail thread.
Surprisingly, these are not visible in either the nongnu.org or the gluster.org archives!

Hence forwarding it here in the hope that others can see it.

---------- Forwarded message ----------
From: Bharata B Rao <bharata.rao at gmail.com>
Date: Fri, Apr 25, 2014 at 3:03 PM
Subject: Fwd: [Gluster-devel] Behaviour of glfs_fini() affecting QEMU
To: Deepak Shetty <dpkshetty at gmail.com>




---------- Forwarded message ----------
From: Soumya Koduri <skoduri at redhat.com>
Date: Fri, Apr 25, 2014 at 12:44 PM
Subject: Re: [Gluster-devel] Behaviour of glfs_fini() affecting QEMU
To: Anand Avati <avati at gluster.org>
Cc: Bharata B Rao <bharata.rao at gmail.com>, pgurusid at redhat.com,
"Gluster Devel" <gluster-devel at nongnu.org>


Hi Anand,

Sure. We will then take care of the entire cleanup in "glfs_fini()" itself.

Thanks,
Soumya


On 04/25/2014 11:57 AM, Anand Avati wrote:

> Hi Soumya,
>
> Let's have just one cleanup API (the existing glfs_fini) that handles
> all cases (whether or not glfs_init() was called, and whether or not it
> was successful) and frees up all resources. Not only is it a bad idea to
> ask existing applications to change their code to accommodate a new API,
> it is simply too much hassle to make cleanup anything more than a single
> API call.
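
For illustration, a minimal sketch of the client lifecycle that proposal
implies (not code from the thread; the volume name, host, and error handling
are placeholders):

    #include <glusterfs/api/glfs.h>

    int open_volume(const char *volname, const char *host)
    {
        int ret = -1;
        glfs_t *fs = glfs_new(volname);    /* allocates the glfs object */

        if (!fs)
            return -1;

        if (glfs_set_volfile_server(fs, "tcp", host, 24007) < 0)
            goto out;
        if (glfs_init(fs) < 0)
            goto out;

        /* ... use the volume ... */
        ret = 0;

    out:
        /* With the behaviour described above, this single call frees
         * everything, regardless of whether glfs_init() was reached or
         * succeeded, so error paths need no special-casing. */
        glfs_fini(fs);
        return ret;
    }
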
>
>
>
> On Wed, Apr 23, 2014 at 10:00 AM, Soumya Koduri <skoduri at redhat.com> wrote:
>
>     Hi Bharata,
>
>     A quick update on this. In the current implementation, we are not
>     cleaning up all the memory allocated via the "glfs_new" routine (in
>     "glfs_fini", i.e., even when glfs_init was done). So after a couple of
>     discussions, we have decided to first define a counterpart cleanup
>     routine for glfs_new (maybe glfs_destroy, as Deepak had suggested) to
>     clean up that memory - Poornima has started working on this - and then
>     decide whether to
>
>     * modify glfs_fini itself to detect the init_not_done cases (Note:
>     this check does not look straightforward; we need to come up with some
>     method to detect such scenarios) and do the necessary cleanup, which
>     would mean no changes on the gfapi clients' side,
>     or
>     * document and ask the gfapi clients to update their code and call
>     glfs_destroy in case of such failures, as suggested by Deepak (sketched
>     below). This seems a much cleaner way to address the problem for now.
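
For comparison, the second option would push a change like the following onto
each client. glfs_destroy here is only the counterpart routine proposed above,
not an existing libgfapi call, and the volume and server names are
placeholders:

    #include <glusterfs/api/glfs.h>

    /* Proposed counterpart of glfs_new(); does not exist in libgfapi today. */
    int glfs_destroy(glfs_t *fs);

    int connect_volume(void)
    {
        glfs_t *fs = glfs_new("myvol");
        if (!fs)
            return -1;

        if (glfs_set_volfile_server(fs, "tcp", "server1", 24007) < 0 ||
            glfs_init(fs) < 0) {
            /* init never completed: the client calls the counterpart of
             * glfs_new() instead of glfs_fini() */
            glfs_destroy(fs);
            return -1;
        }

        /* ... use the volume ... */
        return glfs_fini(fs);
    }
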
>
>     Meanwhile can you please comment on how would it impact Qemu if it
>     needs to make an additional call to the libgfapi for the cleanup.
>
>     Thanks,
>     Soumya
>
>
>     ----- Original Message -----
>     From: "Deepak Shetty" <dpkshetty at gmail.com>
>     To: "Soumya Koduri" <skoduri at redhat.com>
>     Cc: "Bharata B Rao" <bharata.rao at gmail.com>,
>     "Gluster Devel" <gluster-devel at nongnu.org>
>     Sent: Sunday, April 20, 2014 11:59:40 PM
>     Subject: Re: [Gluster-devel] Behaviour of glfs_fini() affecting QEMU
>
>     One more late thought...
>         Maybe this should show up as "known issues" in the recently
>     released gluster 3.5 beta and 3.5 GA release notes (unless fixed, in
>     which case it should show up in the FAQs on gluster.org).
>
>
>     Can someone from Gluster release management take note of this, please?
>
>     thanx,
>     deepak
>
>
>     On Sun, Apr 20, 2014 at 11:57 PM, Deepak Shetty <dpkshetty at gmail.com> wrote:
>
>      > This also tells us that the gfapi-based validation/QE test cases
>      > need to take this scenario into account
>      > so that such issues can be caught sooner in the future :)
>      >
>      > Bharata,
>      >     Does the existing QEMU test case for gfapi cover this?
>      >
>      > thanx,
>      > deepak
>      >
>      >
>      > On Fri, Apr 18, 2014 at 8:23 PM, Soumya Koduri <skoduri at redhat.com> wrote:
>      >
>      >> Posted my comments in the bug link.
>      >>
>      >> "glfs_init" cannot be called earlier, as it checks for
>      >> cmds_args->volfile_server, which is initialized only in
>      >> "glfs_set_volfile_server".
>      >> As Deepak had mentioned, we should either define a new routine to
>      >> do the cleanup in case init was not done, or modify "glfs_fini" to
>      >> handle this special case as well. The latter is the better approach
>      >> IMO, as it wouldn't involve any changes in the applications using
>      >> libgfapi.
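
To illustrate the ordering constraint mentioned above, a short sketch with
placeholder names (not code from the thread):

    glfs_t *fs = glfs_new("testvol");

    /* glfs_init() cannot simply be moved earlier: it needs a volfile
     * source, which is recorded only by glfs_set_volfile_server() (or
     * glfs_set_volfile()), so calling it before that point fails. */
    if (glfs_set_volfile_server(fs, "tcp", "gluster-host", 24007) == 0 &&
        glfs_init(fs) < 0) {
        /* Failure lands here with glfs_new()'s allocations still live;
         * this is the state the cleanup discussion is about. */
    }
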
>      >>
>      >> Thanks,
>      >> Soumya
>      >>
>      >>
>      >> ----- Original Message -----
>      >> From: "Bharata B Rao" <bharata.rao at gmail.com>
>      >> To: "Deepak Shetty" <dpkshetty at gmail.com>
>      >> Cc: "Gluster Devel" <gluster-devel at nongnu.org>
>      >> Sent: Friday, April 18, 2014 8:31:28 AM
>      >> Subject: Re: [Gluster-devel] Behaviour of glfs_fini() affecting
> QEMU
>      >>
>      >> On Thu, Apr 17, 2014 at 7:56 PM, Deepak Shetty
>      >> <dpkshetty at gmail.com> wrote:
>      >>
>      >>
>      >>
>      >>
>      >> The glfs_lock indeed seems to work only when glfs_init is
>      >> successful!
>      >> We can call glfs_unset_volfile_server for the error case of
>      >> glfs_set_volfile_server as a good practice.
>      >> But it does look like we need an opposite of glfs_new (maybe
>      >> glfs_destroy) for cases like these, to clean up what glfs_new()
>      >> allocated.
>      >>
>      >> That's my 2 cents... hope to hear from other gluster core folks
>      >> on this.
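
To make the failure path concrete, a simplified sketch of what a QEMU-style
client does today (not the actual QEMU code; the function and parameter names
are placeholders):

    #include <glusterfs/api/glfs.h>

    static int qemu_gluster_init_sketch(const char *volname,
                                        const char *host, int port)
    {
        glfs_t *fs = glfs_new(volname);      /* allocates ctx, pools, ... */
        if (!fs)
            return -1;

        if (glfs_set_volfile_server(fs, "tcp", host, port) < 0)
            goto fail;                       /* glfs_unset_volfile_server()
                                                could undo this step */
        if (glfs_init(fs) < 0)
            goto fail;

        return 0;                            /* connected; glfs_fini() later */

    fail:
        /* glfs_fini(fs) is not safe here: as noted above, its locking
         * (glfs_lock) only works once glfs_init() has succeeded, so the
         * memory allocated by glfs_new() currently leaks on this path. */
        return -1;
    }
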
>      >>
>      >> There is a launchpad bug tracking this at
>      >> https://bugs.launchpad.net/qemu/+bug/1308542
>      >>
>      >> Regards,
>      >> Bharata.
>      >>
>      >> _______________________________________________
>      >> Gluster-devel mailing list
>      >> Gluster-devel at nongnu.org
>
>      >> https://lists.nongnu.org/mailman/listinfo/gluster-devel
>      >>
>      >
>      >
>
>     _______________________________________________
>     Gluster-devel mailing list
>     Gluster-devel at nongnu.org
>     https://lists.nongnu.org/mailman/listinfo/gluster-devel
>
>
>


-- 
http://raobharata.wordpress.com/

