From hgowtham at redhat.com Thu Aug 1 06:51:48 2019 From: hgowtham at redhat.com (Hari Gowtham) Date: Thu, 1 Aug 2019 12:21:48 +0530 Subject: [Gluster-devel] Release 6.5: Expected tagging on 5th August Message-ID: Hi, Expected tagging date for release-6.5 is on August, 5th, 2019. Please ensure required patches are backported and also are passing regressions and are appropriately reviewed for easy merging and tagging on the date. -- Regards, Hari Gowtham. From hgowtham at redhat.com Thu Aug 1 08:35:02 2019 From: hgowtham at redhat.com (Hari Gowtham) Date: Thu, 1 Aug 2019 14:05:02 +0530 Subject: [Gluster-devel] Release 5.9: Expected tagging on 5th August Message-ID: Hi, Expected tagging date for release-5.9 is on August, 5th, 2019. Please ensure required patches are backported and also are passing regressions and are appropriately reviewed for easy merging and tagging on the date. -- Regards, Hari Gowtham. From miklos at szeredi.hu Thu Aug 1 10:35:03 2019 From: miklos at szeredi.hu (Miklos Szeredi) Date: Thu, 1 Aug 2019 12:35:03 +0200 Subject: [Gluster-devel] [PATCH, RESEND3] fuse: require /dev/fuse reads to have enough buffer capacity (take 2) In-Reply-To: <20190724094556.GA19383@deco.navytux.spb.ru> References: <20190724094556.GA19383@deco.navytux.spb.ru> Message-ID: On Wed, Jul 24, 2019 at 11:46 AM Kirill Smelkov wrote: > > Miklos, > > I was sending this patch for ~1.5 month without any feedback from you[1,2,3]. > The patch was tested by Sander Eikelenboom (original GlusterFS problem > reporter)[4], and you said that it will be ok to retry for next > cycle[5]. I was hoping for this patch to be picked up for 5.3 and queued > to Linus's tree, but in despite several resends from me (the same patch; > just reminders) nothing is happening. v5.3-rc1 came out on last Sunday, > which, in my understanding, denotes the close of 5.3 merge window. What > is going on? Could you please pick up the patch and handle it? Applied. Thanks, Miklos From kirr at nexedi.com Thu Aug 1 13:50:01 2019 From: kirr at nexedi.com (Kirill Smelkov) Date: Thu, 01 Aug 2019 13:50:01 +0000 Subject: [Gluster-devel] [PATCH, RESEND3] fuse: require /dev/fuse reads to have enough buffer capacity (take 2) In-Reply-To: References: <20190724094556.GA19383@deco.navytux.spb.ru> Message-ID: <20190801134955.GA18544@deco.navytux.spb.ru> On Thu, Aug 01, 2019 at 12:35:03PM +0200, Miklos Szeredi wrote: > On Wed, Jul 24, 2019 at 11:46 AM Kirill Smelkov wrote: > > > > Miklos, > > > > I was sending this patch for ~1.5 month without any feedback from you[1,2,3]. > > The patch was tested by Sander Eikelenboom (original GlusterFS problem > > reporter)[4], and you said that it will be ok to retry for next > > cycle[5]. I was hoping for this patch to be picked up for 5.3 and queued > > to Linus's tree, but in despite several resends from me (the same patch; > > just reminders) nothing is happening. v5.3-rc1 came out on last Sunday, > > which, in my understanding, denotes the close of 5.3 merge window. What > > is going on? Could you please pick up the patch and handle it? > > Applied. Thanks... From skoduri at redhat.com Thu Aug 1 17:06:09 2019 From: skoduri at redhat.com (Soumya Koduri) Date: Thu, 1 Aug 2019 22:36:09 +0530 Subject: [Gluster-devel] [Gluster-users] Release 6.5: Expected tagging on 5th August In-Reply-To: References: Message-ID: <0ef5691c-4538-9bb9-76fa-e1c4eaabb54f@redhat.com> Hi Hari, [1] is a critical patch which addresses issue affecting upcall processing by applications such as NFS-Ganesha. 
As soon as it gets merged in master, I shall backport it to release-7/6/5 branches. Kindly consider the same. Thanks, Soumya [1] https://review.gluster.org/#/c/glusterfs/+/23108/ On 8/1/19 12:21 PM, Hari Gowtham wrote: > Hi, > > Expected tagging date for release-6.5 is on August, 5th, 2019. > > Please ensure required patches are backported and also are passing > regressions and are appropriately reviewed for easy merging and tagging > on the date. > From hgowtham at redhat.com Fri Aug 2 01:14:25 2019 From: hgowtham at redhat.com (Hari Gowtham) Date: Fri, 2 Aug 2019 06:44:25 +0530 Subject: [Gluster-devel] [Gluster-users] Release 6.5: Expected tagging on 5th August In-Reply-To: <0ef5691c-4538-9bb9-76fa-e1c4eaabb54f@redhat.com> References: <0ef5691c-4538-9bb9-76fa-e1c4eaabb54f@redhat.com> Message-ID: Hi Soumya, Thanks for the update. Will keep an eye on it. Regards, Hari. On Thu, 1 Aug, 2019, 10:36 PM Soumya Koduri, wrote: > Hi Hari, > > [1] is a critical patch which addresses issue affecting upcall > processing by applications such as NFS-Ganesha. As soon as it gets > merged in master, I shall backport it to release-7/6/5 branches. Kindly > consider the same. > > Thanks, > Soumya > > [1] https://review.gluster.org/#/c/glusterfs/+/23108/ > > On 8/1/19 12:21 PM, Hari Gowtham wrote: > > Hi, > > > > Expected tagging date for release-6.5 is on August, 5th, 2019. > > > > Please ensure required patches are backported and also are passing > > regressions and are appropriately reviewed for easy merging and tagging > > on the date. > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jenkins at build.gluster.org Mon Aug 5 01:45:04 2019 From: jenkins at build.gluster.org (jenkins at build.gluster.org) Date: Mon, 5 Aug 2019 01:45:04 +0000 (UTC) Subject: [Gluster-devel] Weekly Untriaged Bugs Message-ID: <1356453982.43.1564969505007.JavaMail.jenkins@jenkins-el7.rht.gluster.org> [...truncated 7 lines...] https://bugzilla.redhat.com/1733667 / bitrot: glusterfs brick process core https://bugzilla.redhat.com/1731041 / build: GlusterFS fails on RHEL-8 during build. https://bugzilla.redhat.com/1730433 / build: Gluster release 6 build errors on ppc64le https://bugzilla.redhat.com/1734692 / core: brick process coredump while running bug-1432542-mpx-restart-crash.t in a virtual machine https://bugzilla.redhat.com/1736564 / core: GlusterFS files missing randomly. https://bugzilla.redhat.com/1737141 / fuse: read() returns more than file size when using direct I/O https://bugzilla.redhat.com/1730565 / geo-replication: Geo-replication does not sync default ACL https://bugzilla.redhat.com/1736848 / glusterd: Execute the "gluster peer probe invalid_hostname" thread deadlock or the glusterd process crashes https://bugzilla.redhat.com/1734027 / glusterd: glusterd 6.4 memory leaks 2-3 GB per 24h (OOM) https://bugzilla.redhat.com/1728183 / gluster-smb: SMBD thread panics on file operations from Windows, OS X and Linux when using vfs_glusterfs https://bugzilla.redhat.com/1736481 / posix: capture stat failure error while setting the gfid https://bugzilla.redhat.com/1731067 / project-infrastructure: Need nightly build for release 7 branch [...truncated 2 lines...] -------------- next part -------------- A non-text attachment was scrubbed... 
Name: build.log Type: application/octet-stream Size: 1634 bytes Desc: not available URL: From chge at linux.alibaba.com Mon Aug 5 10:01:51 2019 From: chge at linux.alibaba.com (Changwei Ge) Date: Mon, 5 Aug 2019 18:01:51 +0800 Subject: [Gluster-devel] [RFC] What if client fuse process crash? Message-ID: Hi list, If somehow, glusterfs client fuse process dies. All subsequent file operations will be failed with error 'no connection'. I am curious if the only way to recover is umount and mount again? If so, that means all processes working on top of glusterfs have to close files, which sometimes is hard to be acceptable. Thanks, Changwei From ravishankar at redhat.com Tue Aug 6 05:12:38 2019 From: ravishankar at redhat.com (Ravishankar N) Date: Tue, 6 Aug 2019 10:42:38 +0530 Subject: [Gluster-devel] [RFC] What if client fuse process crash? In-Reply-To: References: Message-ID: <4a513b5f-e11e-0137-a539-99c11828e070@redhat.com> On 05/08/19 3:31 PM, Changwei Ge wrote: > Hi list, > > If somehow, glusterfs client fuse process dies. All subsequent file > operations will be failed with error 'no connection'. > > I am curious if the only way to recover is umount and mount again? Yes, this is pretty much the case with all fuse based file systems. You can use -o auto_unmount (https://review.gluster.org/#/c/17230/) to automatically cleanup and not having to manually unmount. > > If so, that means all processes working on top of glusterfs have to > close files, which sometimes is hard to be acceptable. There is https://research.cs.wisc.edu/wind/Publications/refuse-eurosys11.html, which claims to provide a framework for transparent failovers.? I can't find any publicly available code though. Regards, Ravi > > > Thanks, > > Changwei > > > _______________________________________________ > > Community Meeting Calendar: > > APAC Schedule - > Every 2nd and 4th Tuesday at 11:30 AM IST > Bridge: https://bluejeans.com/836554017 > > NA/EMEA Schedule - > Every 1st and 3rd Tuesday at 01:00 PM EDT > Bridge: https://bluejeans.com/486278655 > > Gluster-devel mailing list > Gluster-devel at gluster.org > https://lists.gluster.org/mailman/listinfo/gluster-devel > From pgurusid at redhat.com Tue Aug 6 05:33:40 2019 From: pgurusid at redhat.com (pgurusid at redhat.com) Date: Tue, 06 Aug 2019 05:33:40 +0000 Subject: [Gluster-devel] Canceled event: Gluster Community Meeting (APAC friendly hours) @ Every 2 weeks from 11:30am to 12:30pm on Tuesday 15 times (IST) (gluster-devel@gluster.org) Message-ID: <000000000000ce54b2058f6c2acb@google.com> This event has been canceled. 
Title: Gluster Community Meeting (APAC friendly hours) Bridge: https://bluejeans.com/836554017 Meeting minutes: https://hackmd.io/OqZbh7gfQe6uvVUXUVKJ5g?both Previous Meeting notes: http://github.com/gluster/community When: Every 2 weeks from 11:30am to 12:30pm on Tuesday 15 times India Standard Time - Kolkata Where: https://bluejeans.com/836554017 Calendar: gluster-devel at gluster.org Who: * pgurusid at redhat.com - organizer * gluster-users at gluster.org * maintainers at gluster.org * gluster-devel at gluster.org * ranaraya at redhat.com * khiremat at redhat.com * dcunningham at voisonics.com * rwareing at fb.com * kdhananj at redhat.com * pkarampu at redhat.com * mark.boulton at uwa.edu.au * sunkumar at redhat.com * gabriel.lindeborg at svenskaspel.se * m.vrgotic at activevideo.com * david.spisla at iternity.com * sthomas at rpstechnologysolutions.co.uk * javico at paradigmadigital.com * philip.ruenagel at gmail.com * pauyeung at connexity.com * Max de Graaf * sstephen at redhat.com * jpark at dexyp.com * spalai at redhat.com * rouge2507 at gmail.com * spentaparthi at idirect.net * duprel at email.sc.edu * dan at clough.xyz * m.ragusa at eurodata.de * barchu02 at unm.edu * brian.riddle at storagecraft.com * ryan_groth at wgbh.org * amnerip at fb.com * dph at fb.com Invitation from Google Calendar: https://www.google.com/calendar/ You are receiving this courtesy email at the account gluster-devel at gluster.org because you are an attendee of this event. To stop receiving future updates for this event, decline this event. Alternatively you can sign up for a Google account at https://www.google.com/calendar/ and control your notification settings for your entire calendar. Forwarding this invitation could allow any recipient to send a response to the organizer and be added to the guest list, or invite others regardless of their own invitation status, or to modify your RSVP. Learn more at https://support.google.com/calendar/answer/37135#forwarding -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: text/calendar Size: 5622 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: invite.ics Type: application/ics Size: 5724 bytes Desc: not available URL: From pgurusid at redhat.com Tue Aug 6 05:33:49 2019 From: pgurusid at redhat.com (pgurusid at redhat.com) Date: Tue, 06 Aug 2019 05:33:49 -0000 Subject: [Gluster-devel] Cancelled: Gluster Community Meeting (APAC friendly hours) @ Monday, 13 May 2019 Message-ID: <2111025529.1357.1565069627972.JavaMail.yahoo@tardis002.cal.bf1.yahoo.com> An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: invite.ics Type: application/ics Size: 2587 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: text/calendar Size: 2587 bytes Desc: not available URL: From pgurusid at redhat.com Tue Aug 6 05:33:50 2019 From: pgurusid at redhat.com (pgurusid at redhat.com) Date: Tue, 06 Aug 2019 05:33:50 -0000 Subject: [Gluster-devel] Cancelled: Gluster Community Meeting (APAC friendly hours) @ Monday, 13 May 2019 Message-ID: <1851565590.1343.1565069627275.JavaMail.yahoo@tardis002.cal.bf1.yahoo.com> An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: invite.ics Type: application/ics Size: 2589 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: text/calendar Size: 2589 bytes Desc: not available URL: From pgurusid at redhat.com Tue Aug 6 05:33:58 2019 From: pgurusid at redhat.com (pgurusid at redhat.com) Date: Tue, 06 Aug 2019 05:33:58 -0000 Subject: [Gluster-devel] Cancelled: Gluster Community Meeting (APAC friendly hours) @ Monday, 13 May 2019 Message-ID: <2052996072.649.1565069632934.JavaMail.yahoo@tardis97.cal.gq1.yahoo.com> An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: invite.ics Type: application/ics Size: 2583 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: text/calendar Size: 2583 bytes Desc: not available URL: From chge at linux.alibaba.com Tue Aug 6 06:14:33 2019 From: chge at linux.alibaba.com (Changwei Ge) Date: Tue, 6 Aug 2019 14:14:33 +0800 Subject: [Gluster-devel] [RFC] What if client fuse process crash? In-Reply-To: <4a513b5f-e11e-0137-a539-99c11828e070@redhat.com> References: <4a513b5f-e11e-0137-a539-99c11828e070@redhat.com> Message-ID: <8bb1b31e-49b9-ebd5-b67e-fee108d8ff54@linux.alibaba.com> Hi Ravishankar, Thanks for your share, it's very useful to me. I am setting up a glusterfs storage cluster recently and the umount/mount recovering process bothered me. I happened to find some patches[1] from internet aiming to address such a problem but no idea why they were not managed to merge into glusterfs mainline. Do you know why? Thanks, Changwei [1]: https://review.gluster.org/#/c/glusterfs/+/16843/ https://github.com/gluster/glusterfs/issues/242 On 2019/8/6 1:12 ??, Ravishankar N wrote: > On 05/08/19 3:31 PM, Changwei Ge wrote: >> Hi list, >> >> If somehow, glusterfs client fuse process dies. All subsequent file >> operations will be failed with error 'no connection'. >> >> I am curious if the only way to recover is umount and mount again? > Yes, this is pretty much the case with all fuse based file systems. > You can use -o auto_unmount (https://review.gluster.org/#/c/17230/) to > automatically cleanup and not having to manually unmount. >> >> If so, that means all processes working on top of glusterfs have to >> close files, which sometimes is hard to be acceptable. > > There is > https://research.cs.wisc.edu/wind/Publications/refuse-eurosys11.html, > which claims to provide a framework for transparent failovers.? I > can't find any publicly available code though. > > Regards, > Ravi >> >> >> Thanks, >> >> Changwei >> >> >> _______________________________________________ >> >> Community Meeting Calendar: >> >> APAC Schedule - >> Every 2nd and 4th Tuesday at 11:30 AM IST >> Bridge: https://bluejeans.com/836554017 >> >> NA/EMEA Schedule - >> Every 1st and 3rd Tuesday at 01:00 PM EDT >> Bridge: https://bluejeans.com/486278655 >> >> Gluster-devel mailing list >> Gluster-devel at gluster.org >> https://lists.gluster.org/mailman/listinfo/gluster-devel >> From ravishankar at redhat.com Tue Aug 6 06:57:25 2019 From: ravishankar at redhat.com (Ravishankar N) Date: Tue, 6 Aug 2019 12:27:25 +0530 Subject: [Gluster-devel] [RFC] What if client fuse process crash? 
In-Reply-To: <8bb1b31e-49b9-ebd5-b67e-fee108d8ff54@linux.alibaba.com> References: <4a513b5f-e11e-0137-a539-99c11828e070@redhat.com> <8bb1b31e-49b9-ebd5-b67e-fee108d8ff54@linux.alibaba.com> Message-ID: <897930d4-42f7-5001-775a-8e85fbf0ec9d@redhat.com> On 06/08/19 11:44 AM, Changwei Ge wrote: > Hi Ravishankar, > > > Thanks for your share, it's very useful to me. > > I am setting up a glusterfs storage cluster recently and the > umount/mount recovering process bothered me. Hi Changwei, Why are you needing to do frequent remounts? If your gluster fuse client is crashing frequently, that should be investigated and fixed. If you have a reproducer, please raise a bug with all the details like the glusterfs version, core files and log files. Regards, Ravi > > > I happened to find some patches[1] from internet aiming to address > such a problem but no idea why they were not managed to merge into > glusterfs mainline. > > Do you know why? > > > Thanks, > > Changwei > > > [1]: > > https://review.gluster.org/#/c/glusterfs/+/16843/ > > https://github.com/gluster/glusterfs/issues/242 > > > On 2019/8/6 1:12 ??, Ravishankar N wrote: >> On 05/08/19 3:31 PM, Changwei Ge wrote: >>> Hi list, >>> >>> If somehow, glusterfs client fuse process dies. All subsequent file >>> operations will be failed with error 'no connection'. >>> >>> I am curious if the only way to recover is umount and mount again? >> Yes, this is pretty much the case with all fuse based file systems. >> You can use -o auto_unmount (https://review.gluster.org/#/c/17230/) >> to automatically cleanup and not having to manually unmount. >>> >>> If so, that means all processes working on top of glusterfs have to >>> close files, which sometimes is hard to be acceptable. >> >> There is >> https://research.cs.wisc.edu/wind/Publications/refuse-eurosys11.html, >> which claims to provide a framework for transparent failovers. I >> can't find any publicly available code though. >> >> Regards, >> Ravi >>> >>> >>> Thanks, >>> >>> Changwei >>> >>> >>> _______________________________________________ >>> >>> Community Meeting Calendar: >>> >>> APAC Schedule - >>> Every 2nd and 4th Tuesday at 11:30 AM IST >>> Bridge: https://bluejeans.com/836554017 >>> >>> NA/EMEA Schedule - >>> Every 1st and 3rd Tuesday at 01:00 PM EDT >>> Bridge: https://bluejeans.com/486278655 >>> >>> Gluster-devel mailing list >>> Gluster-devel at gluster.org >>> https://lists.gluster.org/mailman/listinfo/gluster-devel >>> From chge at linux.alibaba.com Tue Aug 6 07:14:46 2019 From: chge at linux.alibaba.com (Changwei Ge) Date: Tue, 6 Aug 2019 15:14:46 +0800 Subject: [Gluster-devel] [RFC] What if client fuse process crash? In-Reply-To: <897930d4-42f7-5001-775a-8e85fbf0ec9d@redhat.com> References: <4a513b5f-e11e-0137-a539-99c11828e070@redhat.com> <8bb1b31e-49b9-ebd5-b67e-fee108d8ff54@linux.alibaba.com> <897930d4-42f7-5001-775a-8e85fbf0ec9d@redhat.com> Message-ID: <4d5e7a13-55e4-dfa2-2cf0-7f86afcabb3d@linux.alibaba.com> On 2019/8/6 2:57 ??, Ravishankar N wrote: > > On 06/08/19 11:44 AM, Changwei Ge wrote: >> Hi Ravishankar, >> >> >> Thanks for your share, it's very useful to me. >> >> I am setting up a glusterfs storage cluster recently and the >> umount/mount recovering process bothered me. > Hi Changwei, > Why are you needing to do frequent remounts? If your gluster fuse > client is crashing frequently, that should be investigated and fixed. > If you have a reproducer, please raise a bug with all the details like > the glusterfs version, core files and log files. 
Hi Ravi, Actually, glusterfs client fuse process ran well in my environment. But high-availability and fault-tolerance are also my big concerns. So I killed the fuse process to see what would happen. AFAIK, userspace processes are likely to be killed or crashed somehow, which is not under our control. :-( Another scenario is *software upgrade*. Since we have to upgrade glusterfs client version in order to enrich features and fix bugs.? It will be friendly to applications if the upgrade is transparent. Thanks, Changwei > Regards, > Ravi >> >> >> I happened to find some patches[1] from internet aiming to address >> such a problem but no idea why they were not managed to merge into >> glusterfs mainline. >> >> Do you know why? >> >> >> Thanks, >> >> Changwei >> >> >> [1]: >> >> https://review.gluster.org/#/c/glusterfs/+/16843/ >> >> https://github.com/gluster/glusterfs/issues/242 >> >> >> On 2019/8/6 1:12 ??, Ravishankar N wrote: >>> On 05/08/19 3:31 PM, Changwei Ge wrote: >>>> Hi list, >>>> >>>> If somehow, glusterfs client fuse process dies. All subsequent file >>>> operations will be failed with error 'no connection'. >>>> >>>> I am curious if the only way to recover is umount and mount again? >>> Yes, this is pretty much the case with all fuse based file systems. >>> You can use -o auto_unmount (https://review.gluster.org/#/c/17230/) >>> to automatically cleanup and not having to manually unmount. >>>> >>>> If so, that means all processes working on top of glusterfs have to >>>> close files, which sometimes is hard to be acceptable. >>> >>> There is >>> https://research.cs.wisc.edu/wind/Publications/refuse-eurosys11.html, >>> which claims to provide a framework for transparent failovers. I >>> can't find any publicly available code though. >>> >>> Regards, >>> Ravi >>>> >>>> >>>> Thanks, >>>> >>>> Changwei >>>> >>>> >>>> _______________________________________________ >>>> >>>> Community Meeting Calendar: >>>> >>>> APAC Schedule - >>>> Every 2nd and 4th Tuesday at 11:30 AM IST >>>> Bridge: https://bluejeans.com/836554017 >>>> >>>> NA/EMEA Schedule - >>>> Every 1st and 3rd Tuesday at 01:00 PM EDT >>>> Bridge: https://bluejeans.com/486278655 >>>> >>>> Gluster-devel mailing list >>>> Gluster-devel at gluster.org >>>> https://lists.gluster.org/mailman/listinfo/gluster-devel >>>> From ndevos at redhat.com Tue Aug 6 07:50:30 2019 From: ndevos at redhat.com (Niels de Vos) Date: Tue, 6 Aug 2019 09:50:30 +0200 Subject: [Gluster-devel] [RFC] What if client fuse process crash? In-Reply-To: <4d5e7a13-55e4-dfa2-2cf0-7f86afcabb3d@linux.alibaba.com> References: <4a513b5f-e11e-0137-a539-99c11828e070@redhat.com> <8bb1b31e-49b9-ebd5-b67e-fee108d8ff54@linux.alibaba.com> <897930d4-42f7-5001-775a-8e85fbf0ec9d@redhat.com> <4d5e7a13-55e4-dfa2-2cf0-7f86afcabb3d@linux.alibaba.com> Message-ID: <20190806075030.GA21914@ndevos-x270> On Tue, Aug 06, 2019 at 03:14:46PM +0800, Changwei Ge wrote: > On 2019/8/6 2:57 ??, Ravishankar N wrote: > > > > On 06/08/19 11:44 AM, Changwei Ge wrote: > > > Hi Ravishankar, > > > > > > > > > Thanks for your share, it's very useful to me. > > > > > > I am setting up a glusterfs storage cluster recently and the > > > umount/mount recovering process bothered me. > > Hi Changwei, > > Why are you needing to do frequent remounts? If your gluster fuse client > > is crashing frequently, that should be investigated and fixed. If you > > have a reproducer, please raise a bug with all the details like the > > glusterfs version, core files and log files. 
> > > Hi Ravi, > > Actually, glusterfs client fuse process ran well in my environment. But > high-availability and fault-tolerance are also my big concerns. > > So I killed the fuse process to see what would happen. AFAIK, userspace > processes are likely to be killed or crashed somehow, which is not under our > control. :-( > > Another scenario is *software upgrade*. Since we have to upgrade glusterfs > client version in order to enrich features and fix bugs.? It will be > friendly to applications if the upgrade is transparent. As open files have a state associated with them, and the state is lost when the fuse process exits. Restarting the fuse process will then need to restore the state of the open files (and caches, and more). This is not trivial and I do not think any work on this end has been done yet. Some users take an alternative route. Mounted filesystems have indeed issues with online updating. So, maybe you do not need to mount the filesystem at all. Depending on the need of your applications, using glusterfs-coreutils instead of a FUSE (or NFS) mount might be an option for you. The short living processes connect to the Gluster Volume when needed, and do not keep a connection open. Updating userspace tools is much simpler than long running processes that are hooked into the kernel. See https://github.com/gluster/glusterfs-coreutils for details. HTH, Niels > > > Thanks, > > Changwei > > > > Regards, > > Ravi > > > > > > > > > I happened to find some patches[1] from internet aiming to address > > > such a problem but no idea why they were not managed to merge into > > > glusterfs mainline. > > > > > > Do you know why? > > > > > > > > > Thanks, > > > > > > Changwei > > > > > > > > > [1]: > > > > > > https://review.gluster.org/#/c/glusterfs/+/16843/ > > > > > > https://github.com/gluster/glusterfs/issues/242 > > > > > > > > > On 2019/8/6 1:12 ??, Ravishankar N wrote: > > > > On 05/08/19 3:31 PM, Changwei Ge wrote: > > > > > Hi list, > > > > > > > > > > If somehow, glusterfs client fuse process dies. All > > > > > subsequent file operations will be failed with error 'no > > > > > connection'. > > > > > > > > > > I am curious if the only way to recover is umount and mount again? > > > > Yes, this is pretty much the case with all fuse based file > > > > systems. You can use -o auto_unmount > > > > (https://review.gluster.org/#/c/17230/) to automatically cleanup > > > > and not having to manually unmount. > > > > > > > > > > If so, that means all processes working on top of glusterfs > > > > > have to close files, which sometimes is hard to be > > > > > acceptable. > > > > > > > > There is > > > > https://research.cs.wisc.edu/wind/Publications/refuse-eurosys11.html, > > > > which claims to provide a framework for transparent failovers. I > > > > can't find any publicly available code though. 
> > > > > > > > Regards, > > > > Ravi > > > > > > > > > > > > > > > Thanks, > > > > > > > > > > Changwei > > > > > > > > > > > > > > > _______________________________________________ > > > > > > > > > > Community Meeting Calendar: > > > > > > > > > > APAC Schedule - > > > > > Every 2nd and 4th Tuesday at 11:30 AM IST > > > > > Bridge: https://bluejeans.com/836554017 > > > > > > > > > > NA/EMEA Schedule - > > > > > Every 1st and 3rd Tuesday at 01:00 PM EDT > > > > > Bridge: https://bluejeans.com/486278655 > > > > > > > > > > Gluster-devel mailing list > > > > > Gluster-devel at gluster.org > > > > > https://lists.gluster.org/mailman/listinfo/gluster-devel > > > > > > _______________________________________________ > > Community Meeting Calendar: > > APAC Schedule - > Every 2nd and 4th Tuesday at 11:30 AM IST > Bridge: https://bluejeans.com/836554017 > > NA/EMEA Schedule - > Every 1st and 3rd Tuesday at 01:00 PM EDT > Bridge: https://bluejeans.com/486278655 > > Gluster-devel mailing list > Gluster-devel at gluster.org > https://lists.gluster.org/mailman/listinfo/gluster-devel > From chge at linux.alibaba.com Tue Aug 6 08:47:46 2019 From: chge at linux.alibaba.com (Changwei Ge) Date: Tue, 6 Aug 2019 16:47:46 +0800 Subject: [Gluster-devel] [RFC] What if client fuse process crash? In-Reply-To: <20190806075030.GA21914@ndevos-x270> References: <4a513b5f-e11e-0137-a539-99c11828e070@redhat.com> <8bb1b31e-49b9-ebd5-b67e-fee108d8ff54@linux.alibaba.com> <897930d4-42f7-5001-775a-8e85fbf0ec9d@redhat.com> <4d5e7a13-55e4-dfa2-2cf0-7f86afcabb3d@linux.alibaba.com> <20190806075030.GA21914@ndevos-x270> Message-ID: Hi Niels, On 2019/8/6 3:50 ??, Niels de Vos wrote: > On Tue, Aug 06, 2019 at 03:14:46PM +0800, Changwei Ge wrote: >> On 2019/8/6 2:57 ??, Ravishankar N wrote: >>> On 06/08/19 11:44 AM, Changwei Ge wrote: >>>> Hi Ravishankar, >>>> >>>> >>>> Thanks for your share, it's very useful to me. >>>> >>>> I am setting up a glusterfs storage cluster recently and the >>>> umount/mount recovering process bothered me. >>> Hi Changwei, >>> Why are you needing to do frequent remounts? If your gluster fuse client >>> is crashing frequently, that should be investigated and fixed. If you >>> have a reproducer, please raise a bug with all the details like the >>> glusterfs version, core files and log files. >> >> Hi Ravi, >> >> Actually, glusterfs client fuse process ran well in my environment. But >> high-availability and fault-tolerance are also my big concerns. >> >> So I killed the fuse process to see what would happen. AFAIK, userspace >> processes are likely to be killed or crashed somehow, which is not under our >> control. :-( >> >> Another scenario is *software upgrade*. Since we have to upgrade glusterfs >> client version in order to enrich features and fix bugs.? It will be >> friendly to applications if the upgrade is transparent. > As open files have a state associated with them, and the state is lost > when the fuse process exits. Restarting the fuse process will then need > to restore the state of the open files (and caches, and more). This is > not trivial and I do not think any work on this end has been done yet. True, tons of work have to be done if we want to restore all files' state to make restarted fuse process continue to work as never be restarted. I suppose two methods might be feasible: ??? One is to try to fetch file state from kernel to restore files' state into fuse process, ??? the other one is to duplicate those? state to a standby process or just use Linux shared memory mechanism? 
> > Some users take an alternative route. Mounted filesystems have indeed > issues with online updating. So, maybe you do not need to mount the > filesystem at all. Depending on the need of your applications, using > glusterfs-coreutils instead of a FUSE (or NFS) mount might be an option > for you. The short living processes connect to the Gluster Volume when > needed, and do not keep a connection open. Updating userspace tools is > much simpler than long running processes that are hooked into the > kernel. > > See https://github.com/gluster/glusterfs-coreutils for details. That's helpful, but I think then some POSIX file operations can't be performed anymore. Thanks, Changwei > > HTH, > Niels > > >> >> Thanks, >> >> Changwei >> >> >>> Regards, >>> Ravi >>>> >>>> I happened to find some patches[1] from internet aiming to address >>>> such a problem but no idea why they were not managed to merge into >>>> glusterfs mainline. >>>> >>>> Do you know why? >>>> >>>> >>>> Thanks, >>>> >>>> Changwei >>>> >>>> >>>> [1]: >>>> >>>> https://review.gluster.org/#/c/glusterfs/+/16843/ >>>> >>>> https://github.com/gluster/glusterfs/issues/242 >>>> >>>> >>>> On 2019/8/6 1:12 ??, Ravishankar N wrote: >>>>> On 05/08/19 3:31 PM, Changwei Ge wrote: >>>>>> Hi list, >>>>>> >>>>>> If somehow, glusterfs client fuse process dies. All >>>>>> subsequent file operations will be failed with error 'no >>>>>> connection'. >>>>>> >>>>>> I am curious if the only way to recover is umount and mount again? >>>>> Yes, this is pretty much the case with all fuse based file >>>>> systems. You can use -o auto_unmount >>>>> (https://review.gluster.org/#/c/17230/) to automatically cleanup >>>>> and not having to manually unmount. >>>>>> If so, that means all processes working on top of glusterfs >>>>>> have to close files, which sometimes is hard to be >>>>>> acceptable. >>>>> There is >>>>> https://research.cs.wisc.edu/wind/Publications/refuse-eurosys11.html, >>>>> which claims to provide a framework for transparent failovers. I >>>>> can't find any publicly available code though. >>>>> >>>>> Regards, >>>>> Ravi >>>>>> >>>>>> Thanks, >>>>>> >>>>>> Changwei >>>>>> >>>>>> >>>>>> _______________________________________________ >>>>>> >>>>>> Community Meeting Calendar: >>>>>> >>>>>> APAC Schedule - >>>>>> Every 2nd and 4th Tuesday at 11:30 AM IST >>>>>> Bridge: https://bluejeans.com/836554017 >>>>>> >>>>>> NA/EMEA Schedule - >>>>>> Every 1st and 3rd Tuesday at 01:00 PM EDT >>>>>> Bridge: https://bluejeans.com/486278655 >>>>>> >>>>>> Gluster-devel mailing list >>>>>> Gluster-devel at gluster.org >>>>>> https://lists.gluster.org/mailman/listinfo/gluster-devel >>>>>> >> _______________________________________________ >> >> Community Meeting Calendar: >> >> APAC Schedule - >> Every 2nd and 4th Tuesday at 11:30 AM IST >> Bridge: https://bluejeans.com/836554017 >> >> NA/EMEA Schedule - >> Every 1st and 3rd Tuesday at 01:00 PM EDT >> Bridge: https://bluejeans.com/486278655 >> >> Gluster-devel mailing list >> Gluster-devel at gluster.org >> https://lists.gluster.org/mailman/listinfo/gluster-devel >> From ndevos at redhat.com Tue Aug 6 09:35:04 2019 From: ndevos at redhat.com (Niels de Vos) Date: Tue, 6 Aug 2019 11:35:04 +0200 Subject: [Gluster-devel] [RFC] What if client fuse process crash? 
In-Reply-To: References: <4a513b5f-e11e-0137-a539-99c11828e070@redhat.com> <8bb1b31e-49b9-ebd5-b67e-fee108d8ff54@linux.alibaba.com> <897930d4-42f7-5001-775a-8e85fbf0ec9d@redhat.com> <4d5e7a13-55e4-dfa2-2cf0-7f86afcabb3d@linux.alibaba.com> <20190806075030.GA21914@ndevos-x270> Message-ID: <20190806093504.GA23319@ndevos-x270> On Tue, Aug 06, 2019 at 04:47:46PM +0800, Changwei Ge wrote: > Hi Niels, > > On 2019/8/6 3:50 ??, Niels de Vos wrote: > > On Tue, Aug 06, 2019 at 03:14:46PM +0800, Changwei Ge wrote: > > > On 2019/8/6 2:57 ??, Ravishankar N wrote: > > > > On 06/08/19 11:44 AM, Changwei Ge wrote: > > > > > Hi Ravishankar, > > > > > > > > > > > > > > > Thanks for your share, it's very useful to me. > > > > > > > > > > I am setting up a glusterfs storage cluster recently and the > > > > > umount/mount recovering process bothered me. > > > > Hi Changwei, > > > > Why are you needing to do frequent remounts? If your gluster fuse client > > > > is crashing frequently, that should be investigated and fixed. If you > > > > have a reproducer, please raise a bug with all the details like the > > > > glusterfs version, core files and log files. > > > > > > Hi Ravi, > > > > > > Actually, glusterfs client fuse process ran well in my environment. But > > > high-availability and fault-tolerance are also my big concerns. > > > > > > So I killed the fuse process to see what would happen. AFAIK, userspace > > > processes are likely to be killed or crashed somehow, which is not under our > > > control. :-( > > > > > > Another scenario is *software upgrade*. Since we have to upgrade glusterfs > > > client version in order to enrich features and fix bugs.? It will be > > > friendly to applications if the upgrade is transparent. > > As open files have a state associated with them, and the state is lost > > when the fuse process exits. Restarting the fuse process will then need > > to restore the state of the open files (and caches, and more). This is > > not trivial and I do not think any work on this end has been done yet. > > > True, tons of work have to be done if we want to restore all files' state to > make restarted fuse process continue to work as never be restarted. > > I suppose two methods might be feasible: > > ??? One is to try to fetch file state from kernel to restore files' state > into fuse process, > > ??? the other one is to duplicate those? state to a standby process or just > use Linux shared memory mechanism? Restoring the state from the kernel would be my preference. That is the view of the storage that the application has as well. But it may not be possible to recover all details that the xlators track. Storing those in shared memory (or file backed persistent storage) might not even be sufficient. With upgrades it is possible to get new features in existing xlators that would need to refresh their state to get the extensions. It is even possible that new xlators get added, and those will need to get the state of the files too. I think, in the end it would boil down to getting the state from the kernel, and revalidating each inode through the mountpoint to the server. This is also what happens on graph-switches (new volume layout or options pushed from the server to client). To get this to work, it needs to be possible for a FUSE service to re-attach itself to a mountpoint where the previous FUSE process detached. I do not think this is possible at the moment, it will require extensions in the FUSE kernel module (and then re-attaching a new state to all inodes). 
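Until something like that exists, the practical recovery is still the umount/remount cycle mentioned earlier in the thread. A rough sketch of what that looks like on a client, where the volume and mount point names are only placeholders:

    # a dead client mount typically reports "Transport endpoint is not connected"
    umount -l /mnt/glusterfs
    mount -t glusterfs -o auto_unmount server1:/myvol /mnt/glusterfs

The auto_unmount option Ravi referred to (assuming your client build has it) only removes the need for the manual unmount after a crash; applications still have to reopen their files once the new client process is running.
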
> > Some users take an alternative route. Mounted filesystems have indeed > > issues with online updating. So, maybe you do not need to mount the > > filesystem at all. Depending on the need of your applications, using > > glusterfs-coreutils instead of a FUSE (or NFS) mount might be an option > > for you. The short living processes connect to the Gluster Volume when > > needed, and do not keep a connection open. Updating userspace tools is > > much simpler than long running processes that are hooked into the > > kernel. > > > > See https://github.com/gluster/glusterfs-coreutils for details. > > > That's helpful, but I think then some POSIX file operations can't be > performed anymore. Indeed, glusterfs-coreutils is more of an object storage interface than a POSIX complaint filesystem. Niels > > > Thanks, > > Changwei > > > > > > HTH, > > Niels > > > > > > > > > > Thanks, > > > > > > Changwei > > > > > > > > > > Regards, > > > > Ravi > > > > > > > > > > I happened to find some patches[1] from internet aiming to address > > > > > such a problem but no idea why they were not managed to merge into > > > > > glusterfs mainline. > > > > > > > > > > Do you know why? > > > > > > > > > > > > > > > Thanks, > > > > > > > > > > Changwei > > > > > > > > > > > > > > > [1]: > > > > > > > > > > https://review.gluster.org/#/c/glusterfs/+/16843/ > > > > > > > > > > https://github.com/gluster/glusterfs/issues/242 > > > > > > > > > > > > > > > On 2019/8/6 1:12 ??, Ravishankar N wrote: > > > > > > On 05/08/19 3:31 PM, Changwei Ge wrote: > > > > > > > Hi list, > > > > > > > > > > > > > > If somehow, glusterfs client fuse process dies. All > > > > > > > subsequent file operations will be failed with error 'no > > > > > > > connection'. > > > > > > > > > > > > > > I am curious if the only way to recover is umount and mount again? > > > > > > Yes, this is pretty much the case with all fuse based file > > > > > > systems. You can use -o auto_unmount > > > > > > (https://review.gluster.org/#/c/17230/) to automatically cleanup > > > > > > and not having to manually unmount. > > > > > > > If so, that means all processes working on top of glusterfs > > > > > > > have to close files, which sometimes is hard to be > > > > > > > acceptable. > > > > > > There is > > > > > > https://research.cs.wisc.edu/wind/Publications/refuse-eurosys11.html, > > > > > > which claims to provide a framework for transparent failovers. I > > > > > > can't find any publicly available code though. 
> > > > > > > > > > > > Regards, > > > > > > Ravi > > > > > > > > > > > > > > Thanks, > > > > > > > > > > > > > > Changwei > > > > > > > > > > > > > > > > > > > > > _______________________________________________ > > > > > > > > > > > > > > Community Meeting Calendar: > > > > > > > > > > > > > > APAC Schedule - > > > > > > > Every 2nd and 4th Tuesday at 11:30 AM IST > > > > > > > Bridge: https://bluejeans.com/836554017 > > > > > > > > > > > > > > NA/EMEA Schedule - > > > > > > > Every 1st and 3rd Tuesday at 01:00 PM EDT > > > > > > > Bridge: https://bluejeans.com/486278655 > > > > > > > > > > > > > > Gluster-devel mailing list > > > > > > > Gluster-devel at gluster.org > > > > > > > https://lists.gluster.org/mailman/listinfo/gluster-devel > > > > > > > > > > _______________________________________________ > > > > > > Community Meeting Calendar: > > > > > > APAC Schedule - > > > Every 2nd and 4th Tuesday at 11:30 AM IST > > > Bridge: https://bluejeans.com/836554017 > > > > > > NA/EMEA Schedule - > > > Every 1st and 3rd Tuesday at 01:00 PM EDT > > > Bridge: https://bluejeans.com/486278655 > > > > > > Gluster-devel mailing list > > > Gluster-devel at gluster.org > > > https://lists.gluster.org/mailman/listinfo/gluster-devel > > > From kkeithle at redhat.com Wed Aug 7 17:38:09 2019 From: kkeithle at redhat.com (Kaleb Keithley) Date: Wed, 7 Aug 2019 13:38:09 -0400 Subject: [Gluster-devel] Important: Debian and Ubuntu packages are changing Message-ID: *TL;DNR: *updates from glusterfs-5.8 to glusterfs-5.9 and from glusterfs-6.4 to glusterfs-6.5, ? using the package repos on https://download.gluster.org or the Gluster PPA on Launchpad? on buster, bullseye/sid, and some Ubuntu releases may not work, or may not work smoothly. Consider yourself warned. Plan accordingly. *Longer Answer*: updates from glusterfs-5.8 to glusterfs-5.9 and from glusterfs-6.4 to glusterfs-6.5, ? using the package repos on https://download.gluster.org or the Gluster PPA on Launchpad ? on buster, bullseye, and some Ubuntu releases may not work, or may not work smoothly. *Why*: The original packaging bits were contributed by the Debian maintainer of GlusterFS. For those that know Debian packaging, these did not follow normal Debian packaging conventions and best practices. Recently ? for some definition of recent ? the powers that be in Debian apparentl insisted that the packaging actually start to follow the conventions and best practices, and the packaging bits were rewritten for Debian. The only problem is that nobody bothered to notify the Gluster Community that this was happening. Nor did they send their new bits to GlusterFS. We were left to find out about it the hard way. *The Issue*: people who have used the packages from https://download.gluster.org are experiencing issues updating other software that depends on glusterfs. *The Change*: Gluster Community packages will now be built using packaging bits derived from the Debian packaging bits, which now follow Debian packaging conventions and best practices. *Conclusion*: This may be painful, but it's better in the long run for everyone. The volunteers who generously build packages in their copious spare time for the community appreciate your patience and understanding. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From kkeithle at redhat.com Wed Aug 7 19:14:26 2019 From: kkeithle at redhat.com (Kaleb Keithley) Date: Wed, 7 Aug 2019 15:14:26 -0400 Subject: [Gluster-devel] Important: Debian and Ubuntu packages are changing In-Reply-To: References: Message-ID: On Wed, Aug 7, 2019 at 1:38 PM Kaleb Keithley wrote: > *... *and some Ubuntu releases > Specifically Ubuntu Disco and Eoan. -------------- next part -------------- An HTML attachment was scrubbed... URL: From hgowtham at redhat.com Fri Aug 9 11:00:12 2019 From: hgowtham at redhat.com (Hari Gowtham) Date: Fri, 9 Aug 2019 16:30:12 +0530 Subject: [Gluster-devel] Change in the release Schedule. Message-ID: Hi, We have been doing the minor release of series 5 and 6 on the same date which is 10th of every month. Doing both on the same day is a lot of work together. To make it easier, we are going to have the series 6 release on 30th of every month. Note: 5 is on 10th and 4.1 is on 20th and once we have 7, we will end of life 4.1 and do 7 on 20th. The schedule will soon be updated on: https://www.gluster.org/release-schedule/ -- Regards, Hari Gowtham. From kkeithle at redhat.com Fri Aug 9 13:50:59 2019 From: kkeithle at redhat.com (Kaleb Keithley) Date: Fri, 9 Aug 2019 09:50:59 -0400 Subject: [Gluster-devel] [Gluster-users] Important: Debian and Ubuntu packages are changing In-Reply-To: References: Message-ID: *On Thu, Aug 8, 2019 at 4:56 PM Ingo Fischer wrote: > Hi Kaleb, > > I'm currently experiencing this issue while trying to upgrade my Proxmox > servers where gluster is installed too. > > Thank you for the official information for the community, but what > exactly do this mean? > > Will upgrades from 5.8 to 5.9 work or what exactly needs to be done in > order to get the update done? > I expect they will work as well as updating from, e.g., gluster's old style glusterfs_5.4 debs to debian's new style glusterfs_5.5 debs. IOW probably not very well. My guess is that you will probably need to uninstall 5.8 followed by installing 5.9. Here at Red Hat, as one might guess, we don't use a lot of Debian or Ubuntu. My experience with Debian and Ubuntu has been limited to building the packages. (FWIW, in a previous job I used SLES and OpenSuSE, and before that I used Slackware.) These are "community" packages and they're free. I personally do feel like the community really should shoulder some of the burden to test them and report any problems. Give them a try. Let us know what does or doesn't work. And send PRs. Debian Stretch is not affected? > TL;DNR: if it was, I would have said so. ;-) The Debian packager didn't change the packaging on stretch or bionic and xenial. The gluster community packages for those distributions are the same as they've always been. > > Thank you for additional information > > Ingo > > Am 07.08.19 um 19:38 schrieb Kaleb Keithley: > > *TL;DNR: *updates from glusterfs-5.8 to glusterfs-5.9 and from > > glusterfs-6.4 to glusterfs-6.5, ? using the package repos on > > https://download.gluster.org or the Gluster PPA on Launchpad? on > > buster, bullseye/sid, and some Ubuntu releases may not work, or may not > > work smoothly. Consider yourself warned. Plan accordingly. > > > > *Longer Answer*: updates from glusterfs-5.8 to glusterfs-5.9 and from > > glusterfs-6.4 to glusterfs-6.5, ? using the package repos on > > https://download.gluster.org or the Gluster PPA on Launchpad ? on > > buster, bullseye, and some Ubuntu releases may not work, or may not work > > smoothly. 
> > > > *Why*: The original packaging bits were contributed by the Debian > > maintainer of GlusterFS. For those that know Debian packaging, these did > > not follow normal Debian packaging conventions and best practices. > > Recently ? for some definition of recent ? the powers that be in Debian > > apparentl insisted that the packaging actually start to follow the > > conventions and best practices, and the packaging bits were rewritten > > for Debian. The only problem is that nobody bothered to notify the > > Gluster Community that this was happening. Nor did they send their new > > bits to GlusterFS. We were left to find out about it the hard way. > > > > *The Issue*: people who have used the packages from > > https://download.gluster.org are experiencing issues updating other > > software that depends on glusterfs. > > > > *The Change*: Gluster Community packages will now be built using > > packaging bits derived from the Debian packaging bits, which now follow > > Debian packaging conventions and best practices. > > > > *Conclusion*: This may be painful, but it's better in the long run for > > everyone. The volunteers who generously build packages in their copious > > spare time for the community appreciate your patience and understanding. > > > > > > > > > > > > > > > > > > > > > > _______________________________________________ > > Gluster-users mailing list > > Gluster-users at gluster.org > > https://lists.gluster.org/mailman/listinfo/gluster-users > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jenkins at build.gluster.org Mon Aug 12 01:45:04 2019 From: jenkins at build.gluster.org (jenkins at build.gluster.org) Date: Mon, 12 Aug 2019 01:45:04 +0000 (UTC) Subject: [Gluster-devel] Weekly Untriaged Bugs Message-ID: <450093515.58.1565574304416.JavaMail.jenkins@jenkins-el7.rht.gluster.org> [...truncated 7 lines...] https://bugzilla.redhat.com/1733667 / bitrot: glusterfs brick process core https://bugzilla.redhat.com/1731041 / build: GlusterFS fails on RHEL-8 during build. https://bugzilla.redhat.com/1734692 / core: brick process coredump while running bug-1432542-mpx-restart-crash.t in a virtual machine https://bugzilla.redhat.com/1738878 / core: FUSE client's memory leak https://bugzilla.redhat.com/1736564 / core: GlusterFS files missing randomly. https://bugzilla.redhat.com/1730565 / geo-replication: Geo-replication does not sync default ACL https://bugzilla.redhat.com/1736848 / glusterd: Execute the "gluster peer probe invalid_hostname" thread deadlock or the glusterd process crashes https://bugzilla.redhat.com/1734027 / glusterd: glusterd 6.4 memory leaks 2-3 GB per 24h (OOM) https://bugzilla.redhat.com/1739320 / glusterd: The result (hostname) of getnameinfo for all bricks (ipv6 addresses) are the same, while they are not. https://bugzilla.redhat.com/1736481 / posix: capture stat failure error while setting the gfid https://bugzilla.redhat.com/1731067 / project-infrastructure: Need nightly build for release 7 branch https://bugzilla.redhat.com/1738778 / project-infrastructure: Unable to setup softserve VM https://bugzilla.redhat.com/1739884 / transport: glusterfsd process crashes with SIGSEGV [...truncated 2 lines...] -------------- next part -------------- A non-text attachment was scrubbed... 
Name: build.log Type: application/octet-stream Size: 1705 bytes Desc: not available URL: From hgowtham at redhat.com Mon Aug 12 07:08:02 2019 From: hgowtham at redhat.com (Hari Gowtham) Date: Mon, 12 Aug 2019 12:38:02 +0530 Subject: [Gluster-devel] Announcing Gluster release 6.5 Message-ID: Hi, The Gluster community is pleased to announce the release of Gluster 6.5 (packages available at [1]). Release notes for the release can be found at [2]. Major changes, features and limitations addressed in this release: None Thanks, Gluster community [1] Packages for 6.5: https://download.gluster.org/pub/gluster/glusterfs/6/6.5/ [2] Release notes for 6.5: https://docs.gluster.org/en/latest/release-notes/6.5/ -- Regards, Hari Gowtham.

From hgowtham at redhat.com Mon Aug 12 09:38:36 2019 From: hgowtham at redhat.com (Hari Gowtham) Date: Mon, 12 Aug 2019 15:08:36 +0530 Subject: [Gluster-devel] Announcing Gluster release 5.9 Message-ID: Hi, The Gluster community is pleased to announce the release of Gluster 5.9 (packages available at [1]). Release notes for the release can be found at [2]. Major changes, features and limitations addressed in this release: None Thanks, Gluster community [1] Packages for 5.9: https://download.gluster.org/pub/gluster/glusterfs/5/5.9/ [2] Release notes for 5.9: https://docs.gluster.org/en/latest/release-notes/5.9/ -- Regards, Hari Gowtham.

From flyxiaoyu at gmail.com Mon Aug 12 12:37:06 2019 From: flyxiaoyu at gmail.com (Frank Yu) Date: Mon, 12 Aug 2019 20:37:06 +0800 Subject: [Gluster-devel] 【replace-brick failed but make there're two same client-id of the gluster cluster, which lead can't mount the gluster anymore】 Message-ID: Hi guys, I met a terrible situation and need all your help. I have a production cluster that was running well at first. The version of gluster is 3.12.15-1.el7.x86_64; the cluster has 12 nodes with 12 bricks (disks) per node, and there is one distributed-replicate volume with 144 bricks (48 * 3). Then one node crashed (the node named nodeA) and all its disks can't be used anymore, but since the OS of the nodes runs on a KVM machine, it came back with 12 new disks. I tried to replace the first brick of nodeA with the cmd "gluster volume replace-brick VOLUMENAME nodeA:/mnt/data-1/data nodeA:/mnt/data-1/data01 commit force"; after some time, it failed with the error "Error : Request timed out". Here came the problem: both "nodeA:/mnt/data-1/data" and "nodeA:/mnt/data-1/data01" show up in the output of the cmd "gluster volume info".
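I guess the leftover can also be seen in the client volfile that glusterd generates; something like the command below should print any subvolume that is defined twice (this assumes a default install, so the path and volfile name under /var/lib/glusterd/vols/ may differ on your systems, and VOLUMENAME is just a placeholder):

    grep '^volume ' /var/lib/glusterd/vols/VOLUMENAME/trusted-VOLUMENAME.tcp-fuse.vol | sort | uniq -d
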
When I try to mount gluster to client with fuse, it report error like below: [2019-08-12 12:27:42.395440] I [MSGID: 100030] [glusterfsd.c:2511:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.12.15 (args: /usr/sbin/glusterfs --volfile-server=xxxxx --volfile-id=/training-data-ali /mnt/glusterfs) [2019-08-12 12:27:42.400015] W [MSGID: 101002] [options.c:995:xl_opt_validate] 0-glusterfs: option 'address-family' is deprecated, preferred is 'transport.address-family', continuing with correction [2019-08-12 12:27:42.404994] I [MSGID: 101190] [event-epoll.c:613:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1 *[2019-08-12 12:27:42.415971] E [MSGID: 101179] [graph.y:153:new_volume] 0-parser: Line 1381: volume ?VOLUME-NAME-client-74' defined again* [2019-08-12 12:27:42.416124] E [MSGID: 100026] [glusterfsd.c:2358:glusterfs_process_volfp] 0-: failed to construct the graph [2019-08-12 12:27:42.416376] E [graph.c:1102:glusterfs_graph_destroy] (-->/usr/sbin/glusterfs(mgmt_getspec_cbk+0x532) [0x55898e35e092] -->/usr/sbin/glusterfs(glusterfs_process_volfp+0x150) [0x55898e357da0] -->/lib64/libglusterfs.so.0(glusterfs_graph_destroy+0x84) [0x7f95f7318754] ) 0-graph: invalid argument: graph [Invalid argument] [2019-08-12 12:27:42.416425] W [glusterfsd.c:1375:cleanup_and_exit] (-->/usr/sbin/glusterfs(mgmt_getspec_cbk+0x532) [0x55898e35e092] -->/usr/sbin/glusterfs(glusterfs_process_volfp+0x163) [0x55898e357db3] -->/usr/sbin/glusterfs(cleanup_and_exit+0x6b) [0x55898e35732b] ) 0-: received signum (-1), shutting down [2019-08-12 12:27:42.416455] I [fuse-bridge.c:5852:fini] 0-fuse: Unmounting '/mnt/glusterfs'. [2019-08-12 12:27:42.429655] I [fuse-bridge.c:5857:fini] 0-fuse: Closing fuse connection to '/mnt/glusterfs-aliyun'. [2019-08-12 12:27:42.429759] W [glusterfsd.c:1375:cleanup_and_exit] (-->/lib64/libpthread.so.0(+0x7e25) [0x7f95f6140e25] -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5) [0x55898e3574b5] -->/usr/sbin/glusterfs(cleanup_and_exit+0x6b) [0x55898e35732b] ) 0-: received signum (15), shutting down So, how can I solve error *?Line 1381: volume ?VOLUME-NAME-client-74' defined again? * -- Regards Frank Yu -------------- next part -------------- An HTML attachment was scrubbed... URL: From sunkumar at redhat.com Tue Aug 13 05:33:25 2019 From: sunkumar at redhat.com (sunkumar at redhat.com) Date: Tue, 13 Aug 2019 05:33:25 +0000 Subject: [Gluster-devel] Invitation: Gluster Community Meeting @ Tue Aug 13, 2019 11:30am - 12:25pm (IST) (gluster-devel@gluster.org) Message-ID: <000000000000bf02f7058ff8fa04@google.com> You have been invited to the following event. Title: Gluster Community Meeting BJ: https://bluejeans.com/836554017 Record your meeting minutes in https://hackmd.io/PEnYhQziQsyBwhMksbRWUw When: Tue Aug 13, 2019 11:30am ? 
12:25pm India Standard Time - Kolkata Where: https://bluejeans.com/836554017 Calendar: gluster-devel at gluster.org Who: * atumball at redhat.com - organizer * gluster-users at gluster.org * gluster-devel at gluster.org * pkarampu at redhat.com - optional * jthottan at redhat.com - optional * moagrawa at redhat.com - optional * amarts at gmail.com - optional * skoduri at redhat.com - optional * khiremat at redhat.com - optional * nbalacha at redhat.com - optional * aspandey at redhat.com - optional * sunkumar at redhat.com - optional * srakonde at redhat.com - optional * achiraya at redhat.com - optional * sankar at redhat.com - optional * kdhananj at redhat.com - optional * rgowdapp at redhat.com - optional * sacharya at redhat.com - optional * ranaraya at redhat.com - optional * chenk at redhat.com - optional * avishwan at redhat.com - optional * pgurusid at redhat.com - optional * spamecha at redhat.com - optional * hgowtham at redhat.com - optional * rkavunga at redhat.com - optional * amukherj at redhat.com - optional * jahernan at redhat.com - optional * ksubrahm at redhat.com - optional Event details: https://www.google.com/calendar/event?action=VIEW&eid=NGd1MjIxOHFxNTRjZm1jN3NkbGZwMm10N2NfMjAxOTA4MTNUMDYwMDAwWiBnbHVzdGVyLWRldmVsQGdsdXN0ZXIub3Jn&tok=MTkjYXR1bWJhbGxAcmVkaGF0LmNvbTdjYzc2NTE5ZDc4MGJjY2QyNTM3NTVkODk0YjBkYzY2NzY5OWYwNGQ&ctz=Asia%2FKolkata&hl=en&es=0 Invitation from Google Calendar: https://www.google.com/calendar/ You are receiving this courtesy email at the account gluster-devel at gluster.org because you are an attendee of this event. To stop receiving future updates for this event, decline this event. Alternatively you can sign up for a Google account at https://www.google.com/calendar/ and control your notification settings for your entire calendar. Forwarding this invitation could allow any recipient to send a response to the organizer and be added to the guest list, or invite others regardless of their own invitation status, or to modify your RSVP. Learn more at https://support.google.com/calendar/answer/37135#forwarding -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: text/calendar Size: 5574 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: invite.ics Type: application/ics Size: 5672 bytes Desc: not available URL: From rkothiya at redhat.com Tue Aug 13 07:05:33 2019 From: rkothiya at redhat.com (Rinku Kothiya) Date: Tue, 13 Aug 2019 12:35:33 +0530 Subject: [Gluster-devel] URGENT: Release 7 blocked due to patches failing centos regression Message-ID: Hi Team, The following patches posted in release 7 are failing centos regression. For some of the patches I have run "recheck centos" multiple times and each time we see different failures. So I am not sure if this is related to the patch or is a spurious failure. But some of the patches are consistently failing on "brick-mux-validation.t". Please advice as release 7 is blocked due to this. 
====================================================== https://review.gluster.org/#/c/glusterfs/+/23195/ ====================================================== Patch Set 1: Build Failed https://build.gluster.org/job/centos7-regression/7348/ : FAILURE <<< 1 test(s) failed ./tests/bugs/glusterd/bug-1595320.t 0 test(s) generated core >>> Patch Set 1: Build Failed https://build.gluster.org/job/centos7-regression/7361/ : FAILURE <<< 1 test(s) failed ./tests/bugs/core/bug-1119582.t 0 test(s) generated core >>> ====================================================== https://review.gluster.org/#/c/glusterfs/+/23196/ ====================================================== Patch Set 1: Build Failed https://build.gluster.org/job/centos7-regression/7349/ : FAILURE <<< 1 test(s) failed ./tests/bugs/glusterd/brick-mux-validation.t 0 test(s) generated core >>> Patch Set 1: Build Failed https://build.gluster.org/job/centos7-regression/7362/ : FAILURE <<< 1 test(s) failed ./tests/bugs/glusterd/brick-mux-validation.t 0 test(s) generated core >>> ========================================================== https://review.gluster.org/#/c/glusterfs/+/23189/ ========================================================== Gluster Build System Aug 9 6:38 PM Patch Set 1: Build Failed https://build.gluster.org/job/centos7-regression/7341/ : FAILURE <<< 1 test(s) failed ./tests/basic/volume-snapshot.t 0 test(s) generated core >>> Patch Set 1: Build Failed https://build.gluster.org/job/centos7-regression/7364/ : FAILURE <<< 1 test(s) failed ./tests/bugs/glusterd/brick-mux-validation.t 0 test(s) generated core >>> Patch Set 1: Build Failed https://build.gluster.org/job/centos7-regression/7365/ : FAILURE <<< 1 test(s) failed ./tests/bugs/glusterd/brick-mux-validation.t 0 test(s) generated core >>> ========================================================== https://review.gluster.org/#/c/glusterfs/+/23190/ ========================================================== Patch Set 1: Build Failed https://build.gluster.org/job/centos7-regression/7342/ : FAILURE <<< 1 test(s) failed ./tests/bugs/cli/bug-1077682.t 0 test(s) generated core >>> ========================================================== https://review.gluster.org/#/c/glusterfs/+/23188/ ========================================================== Patch Set 1: Build Failed https://build.gluster.org/job/centos7-regression/7340/ : FAILURE <<< 1 test(s) failed ./tests/bugs/glusterd/brick-mux-validation.t 0 test(s) generated core >>> Regards Rinku -------------- next part -------------- An HTML attachment was scrubbed... URL: From chge at linux.alibaba.com Fri Aug 16 09:09:58 2019 From: chge at linux.alibaba.com (Changwei Ge) Date: Fri, 16 Aug 2019 17:09:58 +0800 Subject: [Gluster-devel] Glusterfs performance regression with quota enabled Message-ID: <577b78bd-6d47-ead8-274f-784fa1ac9975@linux.alibaba.com> Hi, I am using glusterfs-5.6 with quota enabled. I observed a obvious performance regression about 30% against a certain vdbench workload[1]. But I didn't set up a hard-limit or soft-limit to any particular volume or directory yet, just enable quota. After disabling features/quota, glusterfs performance went normal. Is it normal that qutoa will make the performance sharply drop? 
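If it helps anyone trying to reproduce this, one rough way to see where the extra latency goes is to toggle quota on the same volume and compare brick-side fop latencies with the built-in profiler. This is only a sketch - 'testvol' is a placeholder for the real volume name, and the absolute numbers will of course depend on the vdbench job:

    # baseline, quota disabled
    gluster volume quota testvol disable
    gluster volume profile testvol start
    # ... run the vdbench workload ...
    gluster volume profile testvol info

    # repeat with quota enabled
    gluster volume quota testvol enable
    gluster volume profile testvol info clear
    # ... run the vdbench workload again ...
    gluster volume profile testvol info

If the per-fop latency reported by the bricks grows once quota is on, the overhead is presumably coming from the quota/marker translators on the brick side rather than from the client.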
Thanks,
Changwei

[1]:
messagescan=no
fsd=fsd1,anchor=/mnt/q8,depth=1,width=1,files=50000,size=(32k,20,64k,30,128k,30,256k,20)
fwd=fwd1,fsd=fsd1,operation=read,rdpct=80,xfersize=8k,fileio=random,fileselect=random,threads=8
rd=rd1,fwd=fwd1,fwdrate=max,format=yes,elapsed=100,interval=1

From zgrep at 139.com Fri Aug 16 09:24:49 2019
From: zgrep at 139.com (Xie Changlong)
Date: 16 Aug 2019 17:24:49 +0800
Subject: [Gluster-devel] Glusterfs performance regression with quota enabled
Message-ID: <2019081617244930780117@139.com>

Hi Changwei, I'm sure that if you enable quota, the performance will drop sharply. There is an issue to implement glusterfs project quota just like xfs project quota, but there has been no progress for a long time. Please refer to: https://github.com/gluster/glusterfs/issues/184

From: Changwei Ge
Date: 2019/08/16 (Friday) 17:09
To: gluster-devel
Subject: [Gluster-devel] Glusterfs performance regression with quota enabled

Hi, I am using glusterfs-5.6 with quota enabled. I observed a obvious performance regression about 30% against a certain vdbench workload[1]. But I didn't set up a hard-limit or soft-limit to any particular volume or directory yet, just enable quota. After disabling features/quota, glusterfs performance went normal. Is it normal that qutoa will make the performance sharply drop?

Thanks, Changwei

[1]: messagescan=no fsd=fsd1,anchor=/mnt/q8,depth=1,width=1,files=50000,size=(32k,20,64k,30,128k,30,256k,20) fwd=fwd1,fsd=fsd1,operation=read,rdpct=80,xfersize=8k,fileio=random,fileselect=random,threads=8 rd=rd1,fwd=fwd1,fwdrate=max,format=yes,elapsed=100,interval=1

_______________________________________________
Community Meeting Calendar:
APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/836554017
NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/486278655
Gluster-devel mailing list
Gluster-devel at gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jenkins at build.gluster.org Mon Aug 19 01:45:03 2019
From: jenkins at build.gluster.org (jenkins at build.gluster.org)
Date: Mon, 19 Aug 2019 01:45:03 +0000 (UTC)
Subject: [Gluster-devel] Weekly Untriaged Bugs
Message-ID: <304537507.74.1566179103916.JavaMail.jenkins@jenkins-el7.rht.gluster.org>

[...truncated 7 lines...]
https://bugzilla.redhat.com/1733667 / bitrot: glusterfs brick process core
https://bugzilla.redhat.com/1734692 / core: brick process coredump while running bug-1432542-mpx-restart-crash.t in a virtual machine
https://bugzilla.redhat.com/1738878 / core: FUSE client's memory leak
https://bugzilla.redhat.com/1736564 / core: GlusterFS files missing randomly.
https://bugzilla.redhat.com/1736848 / glusterd: Execute the "gluster peer probe invalid_hostname" thread deadlock or the glusterd process crashes
https://bugzilla.redhat.com/1734027 / glusterd: glusterd 6.4 memory leaks 2-3 GB per 24h (OOM)
https://bugzilla.redhat.com/1739320 / glusterd: The result (hostname) of getnameinfo for all bricks (ipv6 addresses) are the same, while they are not.
https://bugzilla.redhat.com/1741899 / glusterd: the volume of occupied space in the bricks of gluster volume (3 nodes replica) differs on nodes and the healing does not fix it https://bugzilla.redhat.com/1741402 / posix-acl: READDIRP incorrectly updates posix-acl inode ctx https://bugzilla.redhat.com/1738778 / project-infrastructure: Unable to setup softserve VM https://bugzilla.redhat.com/1740413 / rpc: Gluster volume bricks crashes when running a security scan on glusterfs ports https://bugzilla.redhat.com/1739884 / transport: glusterfsd process crashes with SIGSEGV [...truncated 2 lines...] -------------- next part -------------- A non-text attachment was scrubbed... Name: build.log Type: application/octet-stream Size: 1721 bytes Desc: not available URL: From amarts at gmail.com Tue Aug 20 10:30:49 2019 From: amarts at gmail.com (Amar Tumballi) Date: Tue, 20 Aug 2019 16:00:49 +0530 Subject: [Gluster-devel] URGENT: Release 7 blocked due to patches failing centos regression In-Reply-To: References: Message-ID: Looks like these issues are now fixed in master. Need a port to release-7 branch, and other patches has to be taken in. On Tue, Aug 13, 2019 at 12:35 PM Rinku Kothiya wrote: > Hi Team, > > The following patches posted in release 7 are failing centos regression. > For some of the patches I have run "recheck centos" multiple times and each > time we see different failures. So I am not sure if this is related to the > patch or is a spurious failure. But some of the patches are consistently > failing on "brick-mux-validation.t". Please advice as release 7 is blocked > due to this. > > ====================================================== > https://review.gluster.org/#/c/glusterfs/+/23195/ > ====================================================== > > Patch Set 1: > > Build Failed > > https://build.gluster.org/job/centos7-regression/7348/ : FAILURE <<< > 1 test(s) failed > ./tests/bugs/glusterd/bug-1595320.t > > 0 test(s) generated core > >>> > > Patch Set 1: > > Build Failed > > https://build.gluster.org/job/centos7-regression/7361/ : FAILURE <<< > 1 test(s) failed > ./tests/bugs/core/bug-1119582.t > > 0 test(s) generated core > >>> > > ====================================================== > https://review.gluster.org/#/c/glusterfs/+/23196/ > ====================================================== > > Patch Set 1: > > Build Failed > > https://build.gluster.org/job/centos7-regression/7349/ : FAILURE <<< > 1 test(s) failed > ./tests/bugs/glusterd/brick-mux-validation.t > > 0 test(s) generated core > >>> > > Patch Set 1: > > Build Failed > > https://build.gluster.org/job/centos7-regression/7362/ : FAILURE <<< > 1 test(s) failed > ./tests/bugs/glusterd/brick-mux-validation.t > > 0 test(s) generated core > >>> > > ========================================================== > https://review.gluster.org/#/c/glusterfs/+/23189/ > ========================================================== > > Gluster Build System > Aug 9 6:38 PM > > Patch Set 1: > > Build Failed > > https://build.gluster.org/job/centos7-regression/7341/ : FAILURE <<< > 1 test(s) failed > ./tests/basic/volume-snapshot.t > > 0 test(s) generated core > >>> > > > Patch Set 1: > > Build Failed > > https://build.gluster.org/job/centos7-regression/7364/ : FAILURE <<< > 1 test(s) failed > ./tests/bugs/glusterd/brick-mux-validation.t > > 0 test(s) generated core > >>> > > > Patch Set 1: > > Build Failed > > https://build.gluster.org/job/centos7-regression/7365/ : FAILURE <<< 1 > test(s) failed ./tests/bugs/glusterd/brick-mux-validation.t > 
> 0 test(s) generated core > >>> > > ========================================================== > https://review.gluster.org/#/c/glusterfs/+/23190/ > ========================================================== > > Patch Set 1: > > Build Failed > > https://build.gluster.org/job/centos7-regression/7342/ : FAILURE <<< > 1 test(s) failed > ./tests/bugs/cli/bug-1077682.t > > 0 test(s) generated core > >>> > > ========================================================== > https://review.gluster.org/#/c/glusterfs/+/23188/ > ========================================================== > > Patch Set 1: > > Build Failed > > https://build.gluster.org/job/centos7-regression/7340/ : FAILURE <<< > 1 test(s) failed > ./tests/bugs/glusterd/brick-mux-validation.t > > 0 test(s) generated core > >>> > > > Regards > Rinku > _______________________________________________ > > Community Meeting Calendar: > > APAC Schedule - > Every 2nd and 4th Tuesday at 11:30 AM IST > Bridge: https://bluejeans.com/836554017 > > NA/EMEA Schedule - > Every 1st and 3rd Tuesday at 01:00 PM EDT > Bridge: https://bluejeans.com/486278655 > > Gluster-devel mailing list > Gluster-devel at gluster.org > https://lists.gluster.org/mailman/listinfo/gluster-devel > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jim.kinney at gmail.com Tue Aug 20 11:38:27 2019 From: jim.kinney at gmail.com (Jim Kinney) Date: Tue, 20 Aug 2019 07:38:27 -0400 Subject: [Gluster-devel] [Gluster-users] du output showing corrupt file system In-Reply-To: References: Message-ID: <2AD0B4D8-A73F-4A26-B090-20B8F5786AA0@gmail.com> That's not necessarily a gluster issue. Users can create symlinks from a subdirectory up to a parent and that will create a loop. On August 20, 2019 2:22:44 AM EDT, Amudhan P wrote: >Hi, > >Can anyone suggest what could be the error and to fix this issue? > >regards >Amudhan P > >On Sat, Aug 17, 2019 at 6:59 PM Amudhan P wrote: > >> Hi, >> >> I am using Gluster version 3.10.1. >> >> Mounting volume through fuse mount and I have run the command du -hs >> "directory" which holds many subdirectories. >> some of the subdirectory given output with below message. >> >> du: WARNING: Circular directory structure. >> This almost certainly means that you have a corrupted file system. >> NOTIFY YOUR SYSTEM MANAGER. >> The following directory is part of the cycle: >> >> what could be the issue or what should be done to fix this problem? >> >> regards >> Amudhan P >> >> -- Sent from my Android device with K-9 Mail. All tyopes are thumb related and reflect authenticity. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jim.kinney at gmail.com Wed Aug 21 12:52:32 2019 From: jim.kinney at gmail.com (Jim Kinney) Date: Wed, 21 Aug 2019 08:52:32 -0400 Subject: [Gluster-devel] [Gluster-users] du output showing corrupt file system In-Reply-To: References: <2AD0B4D8-A73F-4A26-B090-20B8F5786AA0@gmail.com> Message-ID: Run the du command in the source space. A symlink that uses relative pathing can turn into a problem on a new mount. That said, I've seen "too many levels of linking" errors associated with the gfids dir .glusterfs and gfids of real dirs that are chained links to other dirs. It's still a user space symlink error. It's just compounded by gluster. On August 21, 2019 3:49:45 AM EDT, Amudhan P wrote: >it is definitely issue with gluster there is no symlink involved. > > >On Tue, Aug 20, 2019 at 5:08 PM Jim Kinney >wrote: > >> That's not necessarily a gluster issue. 
Users can create symlinks >from a >> subdirectory up to a parent and that will create a loop. >> >> >> On August 20, 2019 2:22:44 AM EDT, Amudhan P >wrote: >>> >>> Hi, >>> >>> Can anyone suggest what could be the error and to fix this issue? >>> >>> regards >>> Amudhan P >>> >>> On Sat, Aug 17, 2019 at 6:59 PM Amudhan P >wrote: >>> >>>> Hi, >>>> >>>> I am using Gluster version 3.10.1. >>>> >>>> Mounting volume through fuse mount and I have run the command du >-hs >>>> "directory" which holds many subdirectories. >>>> some of the subdirectory given output with below message. >>>> >>>> du: WARNING: Circular directory structure. >>>> This almost certainly means that you have a corrupted file system. >>>> NOTIFY YOUR SYSTEM MANAGER. >>>> The following directory is part of the cycle: >>>> >>>> what could be the issue or what should be done to fix this problem? >>>> >>>> regards >>>> Amudhan P >>>> >>>> >> -- >> Sent from my Android device with K-9 Mail. All tyopes are thumb >related >> and reflect authenticity. >> -- Sent from my Android device with K-9 Mail. All tyopes are thumb related and reflect authenticity. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jim.kinney at gmail.com Thu Aug 22 11:28:40 2019 From: jim.kinney at gmail.com (Jim Kinney) Date: Thu, 22 Aug 2019 07:28:40 -0400 Subject: [Gluster-devel] [Gluster-users] du output showing corrupt file system In-Reply-To: References: <2AD0B4D8-A73F-4A26-B090-20B8F5786AA0@gmail.com> Message-ID: Try running du not in the fuse mounted folder but in the folder on the server providing it. On August 22, 2019 1:57:59 AM EDT, Amudhan P wrote: >Hi Jim, > >"du" command was run from fuse mounted volume. it's a single mount >point. > >Gluster should handle that issue right and I don't have any problem in >accessing issue reported folders but only when running "du" command for >the >folder it throws error msg. > >regards >Amudhan > > >On Wed, Aug 21, 2019 at 6:22 PM Jim Kinney >wrote: > >> Run the du command in the source space. >> >> A symlink that uses relative pathing can turn into a problem on a new >> mount. >> >> That said, I've seen "too many levels of linking" errors associated >with >> the gfids dir .glusterfs and gfids of real dirs that are chained >links to >> other dirs. It's still a user space symlink error. It's just >compounded by >> gluster. >> >> On August 21, 2019 3:49:45 AM EDT, Amudhan P >wrote: >>> >>> it is definitely issue with gluster there is no symlink involved. >>> >>> >>> On Tue, Aug 20, 2019 at 5:08 PM Jim Kinney >wrote: >>> >>>> That's not necessarily a gluster issue. Users can create symlinks >from a >>>> subdirectory up to a parent and that will create a loop. >>>> >>>> >>>> On August 20, 2019 2:22:44 AM EDT, Amudhan P >>>> wrote: >>>>> >>>>> Hi, >>>>> >>>>> Can anyone suggest what could be the error and to fix this issue? >>>>> >>>>> regards >>>>> Amudhan P >>>>> >>>>> On Sat, Aug 17, 2019 at 6:59 PM Amudhan P >wrote: >>>>> >>>>>> Hi, >>>>>> >>>>>> I am using Gluster version 3.10.1. >>>>>> >>>>>> Mounting volume through fuse mount and I have run the command du >-hs >>>>>> "directory" which holds many subdirectories. >>>>>> some of the subdirectory given output with below message. >>>>>> >>>>>> du: WARNING: Circular directory structure. >>>>>> This almost certainly means that you have a corrupted file >system. >>>>>> NOTIFY YOUR SYSTEM MANAGER. >>>>>> The following directory is part of the cycle: >>>>>> >>>>>> what could be the issue or what should be done to fix this >problem? 
>>>>>> >>>>>> regards >>>>>> Amudhan P >>>>>> >>>>>> >>>> -- >>>> Sent from my Android device with K-9 Mail. All tyopes are thumb >related >>>> and reflect authenticity. >>>> >>> >> -- >> Sent from my Android device with K-9 Mail. All tyopes are thumb >related >> and reflect authenticity. >> -- Sent from my Android device with K-9 Mail. All tyopes are thumb related and reflect authenticity.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From chge at linux.alibaba.com Thu Aug 22 12:26:23 2019
From: chge at linux.alibaba.com (Changwei Ge)
Date: Thu, 22 Aug 2019 20:26:23 +0800
Subject: [Gluster-devel] [RFC] alter inode table lock from mutex to rwlock
Message-ID: <5ef42e80-d225-a45f-b952-41a7ea058358@linux.alibaba.com>

Hi,

Now inode_table_t:lock is of type mutex, which I think we can replace with 'pthread_rwlock' for better concurrency, because pthread_rwlock allows more than one thread to access the inode table at the same time. Moreover, from a quick glance at the glusterfs code, the critical sections the lock protects don't take many CPU cycles and involve no I/O or CPU faults/exceptions. I hope I didn't miss something.

If I get an ACK from the major glusterfs developers, I will try to do it.

Thanks.

From atumball at redhat.com Thu Aug 22 12:48:38 2019
From: atumball at redhat.com (Amar Tumballi Suryanarayan)
Date: Thu, 22 Aug 2019 18:18:38 +0530
Subject: [Gluster-devel] [RFC] alter inode table lock from mutex to rwlock
In-Reply-To: <5ef42e80-d225-a45f-b952-41a7ea058358@linux.alibaba.com>
References: <5ef42e80-d225-a45f-b952-41a7ea058358@linux.alibaba.com>
Message-ID:

Hi Changwei Ge,

On Thu, Aug 22, 2019 at 5:57 PM Changwei Ge wrote:
> Hi,
>
> Now inode_table_t:lock is of type mutex, which I think we can replace with
> 'pthread_rwlock' for better concurrency, because pthread_rwlock allows more
> than one thread to access the inode table at the same time. Moreover, from a
> quick glance at the glusterfs code, the critical sections the lock protects
> don't take many CPU cycles and involve no I/O or CPU faults/exceptions.
> I hope I didn't miss something.
>
> If I get an ACK from the major glusterfs developers, I will try to do it.
>

You are right. I believe this is possible. No harm in trying this out.

Xavier, Raghavendra, Pranith, Nithya, do you think this is possible?

Regards,

> Thanks.
> _______________________________________________
>
> Community Meeting Calendar:
>
> APAC Schedule -
> Every 2nd and 4th Tuesday at 11:30 AM IST
> Bridge: https://bluejeans.com/836554017
>
> NA/EMEA Schedule -
> Every 1st and 3rd Tuesday at 01:00 PM EDT
> Bridge: https://bluejeans.com/486278655
>
> Gluster-devel mailing list
> Gluster-devel at gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel
>

--
Amar Tumballi (amarts)
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nux at li.nux.ro Thu Aug 22 16:20:28 2019
From: nux at li.nux.ro (Nux!)
Date: Thu, 22 Aug 2019 17:20:28 +0100
Subject: [Gluster-devel] geo-replication won't start
Message-ID: <143d8bdc905173dd3743f45e67ebf8ee@li.nux.ro>

Hi,

I'm trying the geo-replication feature for the first time ever and I am not having much success (CentOS7, gluster 6.5).

First of all, from the docs I get the impression that I can geo-replicate over ssh to a simple dir, but that doesn't seem to be the case; the "slave" must be a gluster volume, doesn't it?
Second, the slave host is not in the subnet with the other gluster peers, but I reckon this would be the usual case and not a problem. I've stopped the firewall on all peers and the slave host to rule it out, but I can't get the georep started. Creation is successful, however STATUS won't change from Created.

I'm looking through all the logs and I can't see anything meaningful. What steps could I take to debug this further?

Cheers,
Lucian

--
Sent from the Delta quadrant using Borg technology!

From chge at linux.alibaba.com Fri Aug 23 02:02:06 2019
From: chge at linux.alibaba.com (Changwei Ge)
Date: Fri, 23 Aug 2019 10:02:06 +0800
Subject: [Gluster-devel] [RFC] alter inode table lock from mutex to rwlock
In-Reply-To:
References: <5ef42e80-d225-a45f-b952-41a7ea058358@linux.alibaba.com>
Message-ID: <11c78b57-42f3-7d41-0182-fe0431f15c9d@linux.alibaba.com>

Hi Amar,

Thanks for your reply, I will try it out then.

-Changwei

On 2019/8/22 8:48 PM, Amar Tumballi Suryanarayan wrote:
> Hi Changwei Ge,
>
> On Thu, Aug 22, 2019 at 5:57 PM Changwei Ge wrote:
>
> Hi,
>
> Now inode_table_t:lock is of type mutex, which I think we can replace with
> 'pthread_rwlock' for better concurrency, because pthread_rwlock allows more
> than one thread to access the inode table at the same time. Moreover, from a
> quick glance at the glusterfs code, the critical sections the lock protects
> don't take many CPU cycles and involve no I/O or CPU faults/exceptions.
> I hope I didn't miss something.
>
> If I get an ACK from the major glusterfs developers, I will try to do it.
>
>
> You are right. I believe this is possible. No harm in trying this out.
>
> Xavier, Raghavendra, Pranith, Nithya, do you think this is possible?
>
> Regards,
>
> Thanks.
> _______________________________________________
>
> Community Meeting Calendar:
>
> APAC Schedule -
> Every 2nd and 4th Tuesday at 11:30 AM IST
> Bridge: https://bluejeans.com/836554017
>
> NA/EMEA Schedule -
> Every 1st and 3rd Tuesday at 01:00 PM EDT
> Bridge: https://bluejeans.com/486278655
>
> Gluster-devel mailing list
> Gluster-devel at gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel
>
>
>
> --
> Amar Tumballi (amarts)

From bmekala at redhat.com Fri Aug 23 05:48:33 2019
From: bmekala at redhat.com (Bala Konda Reddy Mekala)
Date: Fri, 23 Aug 2019 11:18:33 +0530
Subject: [Gluster-devel] Upstream nightly build on Centos is failing with glusterd crash
Message-ID:

Hi,
On a fresh installation with the nightly build[1], "systemctl glusterd start" ends in a glusterd crash (coredump). A bug was filed[2] and centos-ci for glusto-tests is currently blocked because of it. Please look into it.

Thanks,
Bala

[1] http://artifacts.ci.centos.org/gluster/nightly/master/7/x86_64/
[2] https://bugzilla.redhat.com/show_bug.cgi?id=1744420

From amarts at gmail.com Fri Aug 23 13:11:43 2019
From: amarts at gmail.com (Amar Tumballi)
Date: Fri, 23 Aug 2019 18:41:43 +0530
Subject: [Gluster-devel] Proposal: move glusterfs development to github workflow, completely
Message-ID:

Hi developers,

With this email, I want to understand what is the general feeling around this topic.

We from gluster org (in github.com/gluster) have many projects which follow complete github workflow, where as there are few, specially the main one 'glusterfs', which uses 'Gerrit'.
While this has worked all these years, currently, there is a huge set of brain-share on github workflow as many other top projects, and similar projects use only github as the place to develop, track and run tests etc. As it is possible to have all of the tools required for this project in github itself (code, PR, issues, CI/CD, docs), lets look at how we are structured today: Gerrit - glusterfs code + Review system Bugzilla - For bugs Github - For feature requests Trello - (not very much used) for tracking project development. CI/CD - CentOS-ci / Jenkins, etc but maintained from different repo. Docs - glusterdocs - different repo. Metrics - Nothing (other than github itself tracking contributors). While it may cause a minor glitch for many long time developers who are used to the flow, moving to github would bring all these in single place, makes getting new users easy, and uniform development practices for all gluster org repositories. As it is just the proposal, I would like to hear people's thought on this, and conclude on this another month, so by glusterfs-8 development time, we are clear about this. Can we decide on this before September 30th? Please voice your concerns. Regards, Amar -------------- next part -------------- An HTML attachment was scrubbed... URL: From ykaul at redhat.com Sat Aug 24 03:56:53 2019 From: ykaul at redhat.com (Yaniv Kaul) Date: Fri, 23 Aug 2019 23:56:53 -0400 Subject: [Gluster-devel] Proposal: move glusterfs development to github workflow, completely In-Reply-To: References: Message-ID: On Fri, 23 Aug 2019, 9:13 Amar Tumballi wrote: > Hi developers, > > With this email, I want to understand what is the general feeling around > this topic. > > We from gluster org (in github.com/gluster) have many projects which > follow complete github workflow, where as there are few, specially the main > one 'glusterfs', which uses 'Gerrit'. > > While this has worked all these years, currently, there is a huge set of > brain-share on github workflow as many other top projects, and similar > projects use only github as the place to develop, track and run tests etc. > As it is possible to have all of the tools required for this project in > github itself (code, PR, issues, CI/CD, docs), lets look at how we are > structured today: > > Gerrit - glusterfs code + Review system > Bugzilla - For bugs > Github - For feature requests > Trello - (not very much used) for tracking project development. > CI/CD - CentOS-ci / Jenkins, etc but maintained from different repo. > Docs - glusterdocs - different repo. > Metrics - Nothing (other than github itself tracking contributors). > > While it may cause a minor glitch for many long time developers who are > used to the flow, moving to github would bring all these in single place, > makes getting new users easy, and uniform development practices for all > gluster org repositories. > > As it is just the proposal, I would like to hear people's thought on this, > and conclude on this another month, so by glusterfs-8 development time, we > are clear about this. > I don't like mixed mode, but I also dislike Github's code review tools, so I'd like to remind the option of using http://gerrithub.io/ for code review. Other than that, I'm in favor of moving over. Y. > Can we decide on this before September 30th? Please voice your concerns. 
> > Regards, > Amar > > > > _______________________________________________ > > Community Meeting Calendar: > > APAC Schedule - > Every 2nd and 4th Tuesday at 11:30 AM IST > Bridge: https://bluejeans.com/836554017 > > NA/EMEA Schedule - > Every 1st and 3rd Tuesday at 01:00 PM EDT > Bridge: https://bluejeans.com/486278655 > > Gluster-devel mailing list > Gluster-devel at gluster.org > https://lists.gluster.org/mailman/listinfo/gluster-devel > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sankarshan.mukhopadhyay at gmail.com Sat Aug 24 11:02:56 2019 From: sankarshan.mukhopadhyay at gmail.com (Sankarshan Mukhopadhyay) Date: Sat, 24 Aug 2019 16:32:56 +0530 Subject: [Gluster-devel] [Gluster-Maintainers] Proposal: move glusterfs development to github workflow, completely In-Reply-To: References: Message-ID: On Fri, Aug 23, 2019 at 6:42 PM Amar Tumballi wrote: > > Hi developers, > > With this email, I want to understand what is the general feeling around this topic. > > We from gluster org (in github.com/gluster) have many projects which follow complete github workflow, where as there are few, specially the main one 'glusterfs', which uses 'Gerrit'. > > While this has worked all these years, currently, there is a huge set of brain-share on github workflow as many other top projects, and similar projects use only github as the place to develop, track and run tests etc. As it is possible to have all of the tools required for this project in github itself (code, PR, issues, CI/CD, docs), lets look at how we are structured today: > > Gerrit - glusterfs code + Review system > Bugzilla - For bugs > Github - For feature requests > Trello - (not very much used) for tracking project development. > CI/CD - CentOS-ci / Jenkins, etc but maintained from different repo. > Docs - glusterdocs - different repo. > Metrics - Nothing (other than github itself tracking contributors). > > While it may cause a minor glitch for many long time developers who are used to the flow, moving to github would bring all these in single place, makes getting new users easy, and uniform development practices for all gluster org repositories. > > As it is just the proposal, I would like to hear people's thought on this, and conclude on this another month, so by glusterfs-8 development time, we are clear about this. > I'd want to propose that a decision be arrived at much earlier. Say, within a fortnight ie. mid-Sep. I do not see why this would need a whole month to consider. Such a timeline would also allow to manage changes after proper assessment of sub-tasks. > Can we decide on this before September 30th? Please voice your concerns. > > Regards, > Amar From rtalur at redhat.com Sat Aug 24 11:38:38 2019 From: rtalur at redhat.com (Raghavendra Talur) Date: Sat, 24 Aug 2019 07:38:38 -0400 Subject: [Gluster-devel] [Gluster-Maintainers] Proposal: move glusterfs development to github workflow, completely In-Reply-To: References: Message-ID: On Fri, Aug 23, 2019, 9:12 AM Amar Tumballi wrote: > Hi developers, > > With this email, I want to understand what is the general feeling around > this topic. > > We from gluster org (in github.com/gluster) have many projects which > follow complete github workflow, where as there are few, specially the main > one 'glusterfs', which uses 'Gerrit'. 
> > While this has worked all these years, currently, there is a huge set of > brain-share on github workflow as many other top projects, and similar > projects use only github as the place to develop, track and run tests etc. > As it is possible to have all of the tools required for this project in > github itself (code, PR, issues, CI/CD, docs), lets look at how we are > structured today: > > Gerrit - glusterfs code + Review system > Bugzilla - For bugs > Github - For feature requests > Trello - (not very much used) for tracking project development. > CI/CD - CentOS-ci / Jenkins, etc but maintained from different repo. > Docs - glusterdocs - different repo. > Metrics - Nothing (other than github itself tracking contributors). > > While it may cause a minor glitch for many long time developers who are > used to the flow, moving to github would bring all these in single place, > makes getting new users easy, and uniform development practices for all > gluster org repositories. > > As it is just the proposal, I would like to hear people's thought on this, > and conclude on this another month, so by glusterfs-8 development time, we > are clear about this. > A huge +1 to this proposal. As you said, github has wider mind share and new developers won't have to learn tooling to contribute to gluster. Thanks Raghavendra Talur > > Can we decide on this before September 30th? Please voice your concerns. > > Regards, > Amar > > > > _______________________________________________ > maintainers mailing list > maintainers at gluster.org > https://lists.gluster.org/mailman/listinfo/maintainers > -------------- next part -------------- An HTML attachment was scrubbed... URL: From amarts at gmail.com Sun Aug 25 04:23:23 2019 From: amarts at gmail.com (Amar Tumballi) Date: Sun, 25 Aug 2019 09:53:23 +0530 Subject: [Gluster-devel] [Gluster-Maintainers] Proposal: move glusterfs development to github workflow, completely In-Reply-To: References: Message-ID: On Sat, Aug 24, 2019 at 4:33 PM Sankarshan Mukhopadhyay < sankarshan.mukhopadhyay at gmail.com> wrote: > On Fri, Aug 23, 2019 at 6:42 PM Amar Tumballi wrote: > > > > Hi developers, > > > > With this email, I want to understand what is the general feeling around > this topic. > > > > We from gluster org (in github.com/gluster) have many projects which > follow complete github workflow, where as there are few, specially the main > one 'glusterfs', which uses 'Gerrit'. > > > > While this has worked all these years, currently, there is a huge set of > brain-share on github workflow as many other top projects, and similar > projects use only github as the place to develop, track and run tests etc. > As it is possible to have all of the tools required for this project in > github itself (code, PR, issues, CI/CD, docs), lets look at how we are > structured today: > > > > Gerrit - glusterfs code + Review system > > Bugzilla - For bugs > > Github - For feature requests > > Trello - (not very much used) for tracking project development. > > CI/CD - CentOS-ci / Jenkins, etc but maintained from different repo. > > Docs - glusterdocs - different repo. > > Metrics - Nothing (other than github itself tracking contributors). > > > > While it may cause a minor glitch for many long time developers who are > used to the flow, moving to github would bring all these in single place, > makes getting new users easy, and uniform development practices for all > gluster org repositories. 
> > > > As it is just the proposal, I would like to hear people's thought on > this, and conclude on this another month, so by glusterfs-8 development > time, we are clear about this. > > > > I'd want to propose that a decision be arrived at much earlier. Say, > within a fortnight ie. mid-Sep. I do not see why this would need a > whole month to consider. Such a timeline would also allow to manage > changes after proper assessment of sub-tasks. > > It would be great if we can decide sooner. I kept a month as timeline, as historically, I had not seen much responses to proposal like this. Would be great if we have at least 20+ people participating in this discussion. I am happy to create a poll if everyone prefers that. Regards, Amar > > Can we decide on this before September 30th? Please voice your concerns. > > > > Regards, > > Amar > _______________________________________________ > > Community Meeting Calendar: > > APAC Schedule - > Every 2nd and 4th Tuesday at 11:30 AM IST > Bridge: https://bluejeans.com/836554017 > > NA/EMEA Schedule - > Every 1st and 3rd Tuesday at 01:00 PM EDT > Bridge: https://bluejeans.com/486278655 > > Gluster-devel mailing list > Gluster-devel at gluster.org > https://lists.gluster.org/mailman/listinfo/gluster-devel > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jenkins at build.gluster.org Mon Aug 26 01:45:03 2019 From: jenkins at build.gluster.org (jenkins at build.gluster.org) Date: Mon, 26 Aug 2019 01:45:03 +0000 (UTC) Subject: [Gluster-devel] Weekly Untriaged Bugs Message-ID: <756078512.90.1566783903987.JavaMail.jenkins@jenkins-el7.rht.gluster.org> [...truncated 7 lines...] https://bugzilla.redhat.com/1734692 / core: brick process coredump while running bug-1432542-mpx-restart-crash.t in a virtual machine https://bugzilla.redhat.com/1743195 / core: can't start gluster after upgrade from 5 to 6 https://bugzilla.redhat.com/1738878 / core: FUSE client's memory leak https://bugzilla.redhat.com/1736564 / core: GlusterFS files missing randomly. https://bugzilla.redhat.com/1745026 / fuse: endless heal gluster volume; incrementing number of files to heal when all peers in volume are up https://bugzilla.redhat.com/1736848 / glusterd: Execute the "gluster peer probe invalid_hostname" thread deadlock or the glusterd process crashes https://bugzilla.redhat.com/1734027 / glusterd: glusterd 6.4 memory leaks 2-3 GB per 24h (OOM) https://bugzilla.redhat.com/1744420 / glusterd: glusterd crashing with core dump on the latest nightly builds. https://bugzilla.redhat.com/1743215 / glusterd: glusterd-utils: 0-management: xfs_info exited with non-zero exit status [Permission denied] https://bugzilla.redhat.com/1744883 / glusterd: GlusterFS problem dataloss https://bugzilla.redhat.com/1739320 / glusterd: The result (hostname) of getnameinfo for all bricks (ipv6 addresses) are the same, while they are not. 
https://bugzilla.redhat.com/1741899 / glusterd: the volume of occupied space in the bricks of gluster volume (3 nodes replica) differs on nodes and the healing does not fix it https://bugzilla.redhat.com/1741402 / posix-acl: READDIRP incorrectly updates posix-acl inode ctx https://bugzilla.redhat.com/1744671 / project-infrastructure: Smoke is failing for the changeset https://bugzilla.redhat.com/1738778 / project-infrastructure: Unable to setup softserve VM https://bugzilla.redhat.com/1740968 / replicate: glustershd can not decide heald_sinks, and skip repair, so some entries lingering in volume heal info https://bugzilla.redhat.com/1740413 / rpc: Gluster volume bricks crashes when running a security scan on glusterfs ports https://bugzilla.redhat.com/1739884 / transport: glusterfsd process crashes with SIGSEGV [...truncated 2 lines...] -------------- next part -------------- A non-text attachment was scrubbed... Name: build.log Type: application/octet-stream Size: 2452 bytes Desc: not available URL: From rkothiya at redhat.com Mon Aug 26 05:47:10 2019 From: rkothiya at redhat.com (Rinku Kothiya) Date: Mon, 26 Aug 2019 11:17:10 +0530 Subject: [Gluster-devel] [Gluster-Maintainers] glusterfs-7.0rc0 released Message-ID: Hi, Release-7 RC0 packages are built. This is a good time to start testing the release bits, and reporting any issues on bugzilla. Do post on the lists any testing done and feedback for the same. We have about 2 weeks to GA of release-6 barring any major blockers uncovered during the test phase. Please take this time to help make the release effective, by testing the same. Packages for Fedora 29, Fedora 30, RHEL 8 are at https://download.gluster.org/pub/gluster/glusterfs/qa-releases/7.0rc0/ Packages are signed. The public key is at https://download.gluster.org/pub/gluster/glusterfs/6/rsa.pub Packages for Stretch,Bullseye and CentOS7 will be there as soon as they get built. Regards Rinku -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbellur at redhat.com Mon Aug 26 06:34:32 2019 From: vbellur at redhat.com (Vijay Bellur) Date: Sun, 25 Aug 2019 23:34:32 -0700 Subject: [Gluster-devel] [Gluster-Maintainers] Proposal: move glusterfs development to github workflow, completely In-Reply-To: References: Message-ID: On Fri, Aug 23, 2019 at 6:12 AM Amar Tumballi wrote: > Hi developers, > > With this email, I want to understand what is the general feeling around > this topic. > > We from gluster org (in github.com/gluster) have many projects which > follow complete github workflow, where as there are few, specially the main > one 'glusterfs', which uses 'Gerrit'. > > While this has worked all these years, currently, there is a huge set of > brain-share on github workflow as many other top projects, and similar > projects use only github as the place to develop, track and run tests etc. > As it is possible to have all of the tools required for this project in > github itself (code, PR, issues, CI/CD, docs), lets look at how we are > structured today: > > Gerrit - glusterfs code + Review system > Bugzilla - For bugs > Github - For feature requests > Trello - (not very much used) for tracking project development. > CI/CD - CentOS-ci / Jenkins, etc but maintained from different repo. > Docs - glusterdocs - different repo. > Metrics - Nothing (other than github itself tracking contributors). 
> > While it may cause a minor glitch for many long time developers who are > used to the flow, moving to github would bring all these in single place, > makes getting new users easy, and uniform development practices for all > gluster org repositories. > > As it is just the proposal, I would like to hear people's thought on this, > and conclude on this another month, so by glusterfs-8 development time, we > are clear about this. > > +1 to the idea. While we are at this, any more thoughts about consolidating IRC/Slack/gitter etc.? Thanks, Vijay -------------- next part -------------- An HTML attachment was scrubbed... URL: From ravishankar at redhat.com Mon Aug 26 07:05:15 2019 From: ravishankar at redhat.com (Ravishankar N) Date: Mon, 26 Aug 2019 12:35:15 +0530 Subject: [Gluster-devel] Proposal: move glusterfs development to github workflow, completely In-Reply-To: References: Message-ID: On 24/08/19 9:26 AM, Yaniv Kaul wrote: > I don't like mixed mode, but I also dislike Github's code review > tools, so I'd like to remind the option of using http://gerrithub.io/ > for code review. > Other than that, I'm in favor of moving over. > Y. +1 for using gerrithub for code review when we move to github. From bsasonro at redhat.com Mon Aug 26 07:37:21 2019 From: bsasonro at redhat.com (Barak Sason Rofman) Date: Mon, 26 Aug 2019 10:37:21 +0300 Subject: [Gluster-devel] Proposal: move glusterfs development to github workflow, completely In-Reply-To: References: Message-ID: Greetings all, As a new developer on the project, I might add a fresh look on the matter. Before I can here I was familiar with Github and unfamiliar with Gerrit. Understanding Gerrit itself wasn't too troublesome, but also not needed as I see no benefit of using that system, because as others suggested, solutions like Gerrithub exist. In general centralized workflow is always welcomed and personally I'd be happy to make the switch. +1 for me. On Mon, Aug 26, 2019 at 10:06 AM Ravishankar N wrote: > > On 24/08/19 9:26 AM, Yaniv Kaul wrote: > > I don't like mixed mode, but I also dislike Github's code review > > tools, so I'd like to remind the option of using http://gerrithub.io/ > > for code review. > > Other than that, I'm in favor of moving over. > > Y. > +1 for using gerrithub for code review when we move to github. > _______________________________________________ > > Community Meeting Calendar: > > APAC Schedule - > Every 2nd and 4th Tuesday at 11:30 AM IST > Bridge: https://bluejeans.com/836554017 > > NA/EMEA Schedule - > Every 1st and 3rd Tuesday at 01:00 PM EDT > Bridge: https://bluejeans.com/486278655 > > Gluster-devel mailing list > Gluster-devel at gluster.org > https://lists.gluster.org/mailman/listinfo/gluster-devel > > -- *Barak Sason Rofman* Gluster Storage Development Red Hat Israel 34 Jerusalem rd. Ra'anana, 43501 bsasonro at redhat.com T: *+972-9-7692304* M: *+972-52-4326355* -------------- next part -------------- An HTML attachment was scrubbed... URL: From amukherj at redhat.com Mon Aug 26 08:13:17 2019 From: amukherj at redhat.com (Atin Mukherjee) Date: Mon, 26 Aug 2019 13:43:17 +0530 Subject: [Gluster-devel] [Gluster-users] [Gluster-Maintainers] glusterfs-7.0rc0 released In-Reply-To: References: Message-ID: On Mon, Aug 26, 2019 at 11:18 AM Rinku Kothiya wrote: > Hi, > > Release-7 RC0 packages are built. This is a good time to start testing the > release bits, and reporting any issues on bugzilla. > Do post on the lists any testing done and feedback for the same. 
> > We have about 2 weeks to GA of release-6 barring any major blockers > uncovered during the test phase. Please take this time to help make the > release effective, by testing the same. > I believe you meant release-7 here :-) I'd like to request that just like release-6, we pay some attention on the upgrade testing (release-4/release-5/release-6 to release-7) paths and report back issues here (along with bugzilla links). > Packages for Fedora 29, Fedora 30, RHEL 8 are at > https://download.gluster.org/pub/gluster/glusterfs/qa-releases/7.0rc0/ > > Packages are signed. The public key is at > https://download.gluster.org/pub/gluster/glusterfs/6/rsa.pub > > Packages for Stretch,Bullseye and CentOS7 will be there as soon as they > get built. > > Regards > Rinku > > _______________________________________________ > Gluster-users mailing list > Gluster-users at gluster.org > https://lists.gluster.org/mailman/listinfo/gluster-users -------------- next part -------------- An HTML attachment was scrubbed... URL: From ndevos at redhat.com Mon Aug 26 08:56:26 2019 From: ndevos at redhat.com (Niels de Vos) Date: Mon, 26 Aug 2019 10:56:26 +0200 Subject: [Gluster-devel] [Gluster-Maintainers] Proposal: move glusterfs development to github workflow, completely In-Reply-To: References: Message-ID: <20190826085626.GA28580@ndevos-x270> On Fri, Aug 23, 2019 at 11:56:53PM -0400, Yaniv Kaul wrote: > On Fri, 23 Aug 2019, 9:13 Amar Tumballi wrote: > > > Hi developers, > > > > With this email, I want to understand what is the general feeling around > > this topic. > > > > We from gluster org (in github.com/gluster) have many projects which > > follow complete github workflow, where as there are few, specially the main > > one 'glusterfs', which uses 'Gerrit'. > > > > While this has worked all these years, currently, there is a huge set of > > brain-share on github workflow as many other top projects, and similar > > projects use only github as the place to develop, track and run tests etc. > > As it is possible to have all of the tools required for this project in > > github itself (code, PR, issues, CI/CD, docs), lets look at how we are > > structured today: > > > > Gerrit - glusterfs code + Review system > > Bugzilla - For bugs > > Github - For feature requests > > Trello - (not very much used) for tracking project development. > > CI/CD - CentOS-ci / Jenkins, etc but maintained from different repo. > > Docs - glusterdocs - different repo. > > Metrics - Nothing (other than github itself tracking contributors). > > > > While it may cause a minor glitch for many long time developers who are > > used to the flow, moving to github would bring all these in single place, > > makes getting new users easy, and uniform development practices for all > > gluster org repositories. > > > > As it is just the proposal, I would like to hear people's thought on this, > > and conclude on this another month, so by glusterfs-8 development time, we > > are clear about this. > > > > I don't like mixed mode, but I also dislike Github's code review tools, so > I'd like to remind the option of using http://gerrithub.io/ for code > review. > Other than that, I'm in favor of moving over. > Y. I agree that using GitHub for code review is not optimal. We have many patches for the GlusterFS project that need multiple rounds of review and corrections. Comparing the changes between revisions is something that GitHub does not support, but Gerrit/GerritHub does. 
Before switching over, there also needs to be documentation how to structure the issues in GitHubs tracker (which labels to use, what they mean etc,). Also, what about migration of bugs from Bugzilla to GitHub? Except for those topics, I don't have a problem with moving to GitHub. Niels From rkothiya at redhat.com Mon Aug 26 10:07:17 2019 From: rkothiya at redhat.com (Rinku Kothiya) Date: Mon, 26 Aug 2019 15:37:17 +0530 Subject: [Gluster-devel] [Gluster-users] [Gluster-Maintainers] glusterfs-7.0rc0 released In-Reply-To: References: Message-ID: Hi, I tried upgrading from glusterfs-server-6.5-0 to glusterfs-server-7.0-0.1rc0 without any problem on fedora 30. *Note for testing upgrade : * Glusterfs-6 rpms complied for fedora30. Glusterfs-7 downloaded the rpms from the site ( https://download.gluster.org/pub/gluster/glusterfs/qa-releases/7.0rc0/Fedora/fedora-30/x86_64/). *Output : * # yum localinstall -y {glusterfs-cli,glusterfs-api,glusterfs-libs,glusterfs-client-xlators,glusterfs-fuse,glusterfs,glusterfs-server,python3-gluster,glusterfs-geo-replication} Last metadata expiration check: 0:08:52 ago on Mon 26 Aug 2019 03:10:38 PM IST. Dependencies resolved. ============================================================================================================================================================================ Package Architecture Version Repository Size ============================================================================================================================================================================ Upgrading: glusterfs-cli x86_64 7.0-0.1rc0.fc30 @commandline 182 k glusterfs-api x86_64 7.0-0.1rc0.fc30 @commandline 88 k glusterfs-libs x86_64 7.0-0.1rc0.fc30 @commandline 396 k glusterfs-client-xlators x86_64 7.0-0.1rc0.fc30 @commandline 809 k glusterfs-fuse x86_64 7.0-0.1rc0.fc30 @commandline 136 k glusterfs x86_64 7.0-0.1rc0.fc30 @commandline 611 k glusterfs-server x86_64 7.0-0.1rc0.fc30 @commandline 1.3 M python3-gluster x86_64 7.0-0.1rc0.fc30 @commandline 20 k glusterfs-geo-replication x86_64 7.0-0.1rc0.fc30 @commandline 179 k Transaction Summary ============================================================================================================================================================================ Upgrade 9 Packages Total size: 3.6 M Downloading Packages: Running transaction check Transaction check succeeded. Running transaction test Transaction test succeeded. 
Running transaction Preparing : 1/1 Running scriptlet: glusterfs-libs-7.0-0.1rc0.fc30.x86_64 Upgrading : glusterfs-libs-7.0-0.1rc0.fc30.x86_64 1/18 Running scriptlet: glusterfs-7.0-0.1rc0.fc30.x86_64 2/18 Upgrading : glusterfs-7.0-0.1rc0.fc30.x86_64 2/18 Running scriptlet: glusterfs-7.0-0.1rc0.fc30.x86_64 2/18 Upgrading : glusterfs-client-xlators-7.0-0.1rc0.fc30.x86_64 3/18 Upgrading : glusterfs-api-7.0-0.1rc0.fc30.x86_64 4/18 Upgrading : glusterfs-fuse-7.0-0.1rc0.fc30.x86_64 5/18 Upgrading : glusterfs-cli-7.0-0.1rc0.fc30.x86_64 6/18 Upgrading : glusterfs-server-7.0-0.1rc0.fc30.x86_64 7/18 Running scriptlet: glusterfs-server-7.0-0.1rc0.fc30.x86_64 7/18 Upgrading : python3-gluster-7.0-0.1rc0.fc30.x86_64 8/18 Upgrading : glusterfs-geo-replication-7.0-0.1rc0.fc30.x86_64 9/18 Running scriptlet: glusterfs-geo-replication-7.0-0.1rc0.fc30.x86_64 9/18 Cleanup : glusterfs-geo-replication-6.5-0.1.git988a3dcea.fc30.x86_64 10/18 Cleanup : python3-gluster-6.5-0.1.git988a3dcea.fc30.x86_64 11/18 Running scriptlet: glusterfs-server-6.5-0.1.git988a3dcea.fc30.x86_64 12/18 Cleanup : glusterfs-server-6.5-0.1.git988a3dcea.fc30.x86_64 12/18 Running scriptlet: glusterfs-server-6.5-0.1.git988a3dcea.fc30.x86_64 12/18 Cleanup : glusterfs-api-6.5-0.1.git988a3dcea.fc30.x86_64 . . . Verifying : python3-gluster-7.0-0.1rc0.fc30.x86_64 15/18 Verifying : python3-gluster-6.5-0.1.git988a3dcea.fc30.x86_64 16/18 Verifying : glusterfs-geo-replication-7.0-0.1rc0.fc30.x86_64 17/18 Verifying : glusterfs-geo-replication-6.5-0.1.git988a3dcea.fc30.x86_64 18/18 Upgraded: glusterfs-cli-7.0-0.1rc0.fc30.x86_64 glusterfs-api-7.0-0.1rc0.fc30.x86_64 glusterfs-libs-7.0-0.1rc0.fc30.x86_64 glusterfs-client-xlators-7.0-0.1rc0.fc30.x86_64 glusterfs-fuse-7.0-0.1rc0.fc30.x86_64 glusterfs-7.0-0.1rc0.fc30.x86_64 glusterfs-server-7.0-0.1rc0.fc30.x86_64 python3-gluster-7.0-0.1rc0.fc30.x86_64 glusterfs-geo-replication-7.0-0.1rc0.fc30.x86_64 Complete! Regards Rinku On Mon, Aug 26, 2019 at 1:43 PM Atin Mukherjee wrote: > > > On Mon, Aug 26, 2019 at 11:18 AM Rinku Kothiya > wrote: > >> Hi, >> >> Release-7 RC0 packages are built. This is a good time to start testing >> the release bits, and reporting any issues on bugzilla. >> Do post on the lists any testing done and feedback for the same. >> >> We have about 2 weeks to GA of release-6 barring any major blockers >> uncovered during the test phase. Please take this time to help make the >> release effective, by testing the same. >> > > I believe you meant release-7 here :-) > > I'd like to request that just like release-6, we pay some attention on the > upgrade testing (release-4/release-5/release-6 to release-7) paths and > report back issues here (along with bugzilla links). > > >> Packages for Fedora 29, Fedora 30, RHEL 8 are at >> https://download.gluster.org/pub/gluster/glusterfs/qa-releases/7.0rc0/ >> >> Packages are signed. The public key is at >> https://download.gluster.org/pub/gluster/glusterfs/6/rsa.pub >> >> Packages for Stretch,Bullseye and CentOS7 will be there as soon as they >> get built. >> >> Regards >> Rinku >> >> _______________________________________________ >> Gluster-users mailing list >> Gluster-users at gluster.org >> https://lists.gluster.org/mailman/listinfo/gluster-users > > On Mon, Aug 26, 2019 at 1:43 PM Atin Mukherjee wrote: > > > On Mon, Aug 26, 2019 at 11:18 AM Rinku Kothiya > wrote: > >> Hi, >> >> Release-7 RC0 packages are built. This is a good time to start testing >> the release bits, and reporting any issues on bugzilla. 
>> Do post on the lists any testing done and feedback for the same. >> >> We have about 2 weeks to GA of release-6 barring any major blockers >> uncovered during the test phase. Please take this time to help make the >> release effective, by testing the same. >> > > I believe you meant release-7 here :-) > > I'd like to request that just like release-6, we pay some attention on the > upgrade testing (release-4/release-5/release-6 to release-7) paths and > report back issues here (along with bugzilla links). > > >> Packages for Fedora 29, Fedora 30, RHEL 8 are at >> https://download.gluster.org/pub/gluster/glusterfs/qa-releases/7.0rc0/ >> >> Packages are signed. The public key is at >> https://download.gluster.org/pub/gluster/glusterfs/6/rsa.pub >> >> Packages for Stretch,Bullseye and CentOS7 will be there as soon as they >> get built. >> >> Regards >> Rinku >> >> _______________________________________________ >> Gluster-users mailing list >> Gluster-users at gluster.org >> https://lists.gluster.org/mailman/listinfo/gluster-users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From joe at julianfamily.org Mon Aug 26 14:06:48 2019 From: joe at julianfamily.org (Joe Julian) Date: Mon, 26 Aug 2019 07:06:48 -0700 Subject: [Gluster-devel] [Gluster-Maintainers] Proposal: move glusterfs development to github workflow, completely In-Reply-To: <20190826085626.GA28580@ndevos-x270> References: <20190826085626.GA28580@ndevos-x270> Message-ID: <1C61604E-1B6E-41D8-887C-4A5A995241E1@julianfamily.org> > Comparing the changes between revisions is something that GitHub does not support... It does support that, actually. -------------- next part -------------- An HTML attachment was scrubbed... URL: From avishwan at redhat.com Mon Aug 26 15:06:30 2019 From: avishwan at redhat.com (Aravinda Vishwanathapura Krishna Murthy) Date: Mon, 26 Aug 2019 20:36:30 +0530 Subject: [Gluster-devel] [Gluster-Maintainers] Proposal: move glusterfs development to github workflow, completely In-Reply-To: <1C61604E-1B6E-41D8-887C-4A5A995241E1@julianfamily.org> References: <20190826085626.GA28580@ndevos-x270> <1C61604E-1B6E-41D8-887C-4A5A995241E1@julianfamily.org> Message-ID: On Mon, Aug 26, 2019 at 7:49 PM Joe Julian wrote: > > Comparing the changes between revisions is something > that GitHub does not support... > > It does support that, > actually._______________________________________________ > Yes, it does support. We need to use Squash merge after all review is done. A sample pull request is here to see reviews with multiple revisions. https://github.com/aravindavk/reviewdemo/pull/1 > maintainers mailing list > maintainers at gluster.org > https://lists.gluster.org/mailman/listinfo/maintainers > -- regards Aravinda VK -------------- next part -------------- An HTML attachment was scrubbed... URL: From joe at julianfamily.org Mon Aug 26 15:08:36 2019 From: joe at julianfamily.org (Joe Julian) Date: Mon, 26 Aug 2019 08:08:36 -0700 Subject: [Gluster-devel] [Gluster-Maintainers] Proposal: move glusterfs development to github workflow, completely In-Reply-To: References: <20190826085626.GA28580@ndevos-x270> <1C61604E-1B6E-41D8-887C-4A5A995241E1@julianfamily.org> Message-ID: You can also see diffs between force pushes now. On August 26, 2019 8:06:30 AM PDT, Aravinda Vishwanathapura Krishna Murthy wrote: >On Mon, Aug 26, 2019 at 7:49 PM Joe Julian >wrote: > >> > Comparing the changes between revisions is something >> that GitHub does not support... 
>> >> It does support that, >> actually._______________________________________________ >> > >Yes, it does support. We need to use Squash merge after all review is >done. >A sample pull request is here to see reviews with multiple revisions. > >https://github.com/aravindavk/reviewdemo/pull/1 > > > > >> maintainers mailing list >> maintainers at gluster.org >> https://lists.gluster.org/mailman/listinfo/maintainers >> > > >-- >regards >Aravinda VK -- Sent from my Android device with K-9 Mail. Please excuse my brevity. -------------- next part -------------- An HTML attachment was scrubbed... URL: From avishwan at redhat.com Mon Aug 26 16:51:33 2019 From: avishwan at redhat.com (Aravinda Vishwanathapura Krishna Murthy) Date: Mon, 26 Aug 2019 22:21:33 +0530 Subject: [Gluster-devel] [Gluster-Maintainers] Proposal: move glusterfs development to github workflow, completely In-Reply-To: References: <20190826085626.GA28580@ndevos-x270> <1C61604E-1B6E-41D8-887C-4A5A995241E1@julianfamily.org> Message-ID: On Mon, Aug 26, 2019 at 8:44 PM Joe Julian wrote: > You can also see diffs between force pushes now. > Nice. > On August 26, 2019 8:06:30 AM PDT, Aravinda Vishwanathapura Krishna Murthy > wrote: >> >> >> >> On Mon, Aug 26, 2019 at 7:49 PM Joe Julian wrote: >> >>> > Comparing the changes between revisions is something >>> that GitHub does not support... >>> >>> It does support that, >>> actually._______________________________________________ >>> >> >> Yes, it does support. We need to use Squash merge after all review is >> done. >> A sample pull request is here to see reviews with multiple revisions. >> >> https://github.com/aravindavk/reviewdemo/pull/1 >> >> >> >> >>> maintainers mailing list >>> maintainers at gluster.org >>> https://lists.gluster.org/mailman/listinfo/maintainers >>> >> >> > -- > Sent from my Android device with K-9 Mail. Please excuse my brevity. > -- regards Aravinda VK -------------- next part -------------- An HTML attachment was scrubbed... URL: From ndevos at redhat.com Mon Aug 26 18:40:23 2019 From: ndevos at redhat.com (Niels de Vos) Date: Mon, 26 Aug 2019 20:40:23 +0200 Subject: [Gluster-devel] [Gluster-Maintainers] Proposal: move glusterfs development to github workflow, completely In-Reply-To: References: <20190826085626.GA28580@ndevos-x270> <1C61604E-1B6E-41D8-887C-4A5A995241E1@julianfamily.org> Message-ID: <20190826184023.GE28580@ndevos-x270> On Mon, Aug 26, 2019 at 08:36:30PM +0530, Aravinda Vishwanathapura Krishna Murthy wrote: > On Mon, Aug 26, 2019 at 7:49 PM Joe Julian wrote: > > > > Comparing the changes between revisions is something > > that GitHub does not support... > > > > It does support that, > > actually._______________________________________________ > > > > Yes, it does support. We need to use Squash merge after all review is done. Squash merge would also combine multiple commits that are intended to stay separate. This is really bad :-( Niels From ndevos at redhat.com Mon Aug 26 18:41:38 2019 From: ndevos at redhat.com (Niels de Vos) Date: Mon, 26 Aug 2019 20:41:38 +0200 Subject: [Gluster-devel] [Gluster-Maintainers] Proposal: move glusterfs development to github workflow, completely In-Reply-To: References: <20190826085626.GA28580@ndevos-x270> <1C61604E-1B6E-41D8-887C-4A5A995241E1@julianfamily.org> Message-ID: <20190826184138.GF28580@ndevos-x270> On Mon, Aug 26, 2019 at 08:08:36AM -0700, Joe Julian wrote: > You can also see diffs between force pushes now. That is great! It is the feature that I was looking for. 
I have not noticed it yet, will pay attention to it while working on other projects. Thanks, Niels From rabhat at redhat.com Mon Aug 26 19:39:04 2019 From: rabhat at redhat.com (FNU Raghavendra Manjunath) Date: Mon, 26 Aug 2019 15:39:04 -0400 Subject: [Gluster-devel] [Gluster-Maintainers] Proposal: move glusterfs development to github workflow, completely In-Reply-To: <20190826184138.GF28580@ndevos-x270> References: <20190826085626.GA28580@ndevos-x270> <1C61604E-1B6E-41D8-887C-4A5A995241E1@julianfamily.org> <20190826184138.GF28580@ndevos-x270> Message-ID: +1 to the idea. On Mon, Aug 26, 2019 at 2:41 PM Niels de Vos wrote: > On Mon, Aug 26, 2019 at 08:08:36AM -0700, Joe Julian wrote: > > You can also see diffs between force pushes now. > > That is great! It is the feature that I was looking for. I have not > noticed it yet, will pay attention to it while working on other > projects. > > Thanks, > Niels > _______________________________________________ > > Community Meeting Calendar: > > APAC Schedule - > Every 2nd and 4th Tuesday at 11:30 AM IST > Bridge: https://bluejeans.com/836554017 > > NA/EMEA Schedule - > Every 1st and 3rd Tuesday at 01:00 PM EDT > Bridge: https://bluejeans.com/486278655 > > Gluster-devel mailing list > Gluster-devel at gluster.org > https://lists.gluster.org/mailman/listinfo/gluster-devel > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From atumball at redhat.com Tue Aug 27 01:27:14 2019 From: atumball at redhat.com (Amar Tumballi Suryanarayan) Date: Tue, 27 Aug 2019 06:57:14 +0530 Subject: [Gluster-devel] [Gluster-Maintainers] Proposal: move glusterfs development to github workflow, completely In-Reply-To: <20190826184023.GE28580@ndevos-x270> References: <20190826085626.GA28580@ndevos-x270> <1C61604E-1B6E-41D8-887C-4A5A995241E1@julianfamily.org> <20190826184023.GE28580@ndevos-x270> Message-ID: On Tue, Aug 27, 2019 at 12:10 AM Niels de Vos wrote: > On Mon, Aug 26, 2019 at 08:36:30PM +0530, Aravinda Vishwanathapura Krishna > Murthy wrote: > > On Mon, Aug 26, 2019 at 7:49 PM Joe Julian wrote: > > > > > > Comparing the changes between revisions is something > > > that GitHub does not support... > > > > > > It does support that, > > > actually._______________________________________________ > > > > > > > Yes, it does support. We need to use Squash merge after all review is > done. > > Squash merge would also combine multiple commits that are intended to > stay separate. This is really bad :-( > > We should treat 1 patch in gerrit as 1 PR in github, then squash merge works same as how reviews in gerrit are done. Or we can come up with label, upon which we can actually do 'rebase and merge' option, which can preserve the commits as is. -Amar > Niels > _______________________________________________ > maintainers mailing list > maintainers at gluster.org > https://lists.gluster.org/mailman/listinfo/maintainers > -- Amar Tumballi (amarts) -------------- next part -------------- An HTML attachment was scrubbed... 
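As an aside for reviewers who prefer the command line: the revision-to-revision comparison discussed in this thread can also be done locally with git itself, independent of the GitHub UI. A rough sketch, assuming the old head of a pull request was saved to a local branch before the contributor force-pushed (the PR number and branch names below are invented for illustration, not a real glusterfs change):

    # GitHub publishes every pull request head under refs/pull/<id>/head;
    # fetch the current (post force-push) revision into a local branch
    git fetch origin pull/1234/head:pr-1234-v2

    # compare the old series against the new one, commit by commit
    git range-diff origin/master pr-1234-v1 pr-1234-v2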
From amukherj at redhat.com Tue Aug 27 02:33:28 2019
From: amukherj at redhat.com (Atin Mukherjee)
Date: Tue, 27 Aug 2019 08:03:28 +0530
Subject: [Gluster-devel] Fwd: [Gluster-Maintainers] Build failed in Jenkins: regression-test-burn-in #4710
In-Reply-To: <752238962.92.1566842514995.JavaMail.jenkins@jenkins-el7.rht.gluster.org>
References: <1957465285.88.1566762495598.JavaMail.jenkins@jenkins-el7.rht.gluster.org> <752238962.92.1566842514995.JavaMail.jenkins@jenkins-el7.rht.gluster.org>
Message-ID:

For the last few days I have been trying to understand the nightly failures we kept seeing even after addressing the port-already-in-use issue. Here is the analysis.

From the console output of https://build.gluster.org/job/regression-test-burn-in/4710/consoleFull

19:51:56 Started by upstream project "nightly-master" build number 843
19:51:56 originally caused by:
19:51:56 Started by timer
19:51:56 Running as SYSTEM
19:51:57 Building remotely on builder209.aws.gluster.org (centos7) in workspace /home/jenkins/root/workspace/regression-test-burn-in
19:51:58 No credentials specified
19:51:58 > git rev-parse --is-inside-work-tree # timeout=10
19:51:58 Fetching changes from the remote Git repository
19:51:58 > git config remote.origin.url git://review.gluster.org/glusterfs.git # timeout=10
19:51:58 Fetching upstream changes from git://review.gluster.org/glusterfs.git
19:51:58 > git --version # timeout=10
19:51:58 > git fetch --tags --progress git://review.gluster.org/glusterfs.git refs/heads/master # timeout=10
19:52:01 > git rev-parse origin/master^{commit} # timeout=10
19:52:01 Checking out Revision a31fad885c30cbc1bea652349c7d52bac1414c08 (origin/master)
19:52:01 > git config core.sparsecheckout # timeout=10
19:52:01 > git checkout -f a31fad885c30cbc1bea652349c7d52bac1414c08 # timeout=10
19:52:02 Commit message: "tests: heal-info add --xml option for more coverage"
19:52:02 > git rev-list --no-walk a31fad885c30cbc1bea652349c7d52bac1414c08 # timeout=10
19:52:02 [regression-test-burn-in] $ /bin/bash /tmp/jenkins7274529097702336737.sh
19:52:02 Start time Mon Aug 26 14:22:02 UTC 2019

The latest commit it picked up as part of the git checkout is quite old, and hence the latest nightly runs continue to hit the same failures that were already addressed by commit c370c70:

commit c370c70f77079339e2cfb7f284f3a2fb13fd2f97
Author: Mohit Agrawal
Date:   Tue Aug 13 18:45:43 2019 +0530

    rpc: glusterd start is failed and throwing an error Address already in use

    Problem: Some of the .t are failed due to bind is throwing an error EADDRINUSE

    Solution: After killing all gluster processes .t is trying to start glusterd
    but somehow if kernel has not cleaned up resources(socket) then glusterd
    startup is failed due to bind system call failure. To avoid the issue retries
    to call bind 10 times to execute system call succesfully

    Change-Id: Ia5fd6b788f7b211c1508c1b7304fc08a32266629
    Fixes: bz#1743020
    Signed-off-by: Mohit Agrawal

So the (puzzling) question is: why are we picking up an old commit?
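One quick way to check whether the checkout is simply stale is to compare the revision in the console output above against the current tip of the branch on the remote. A minimal sketch (it only assumes a machine with git installed; no local clone is needed):

    # ask the upstream repository for the current tip of master
    git ls-remote git://review.gluster.org/glusterfs.git refs/heads/master

    # if the hash printed here is newer than the one the job checked out
    # (a31fad885c30cbc1bea652349c7d52bac1414c08), then whatever the builder
    # fetched from was serving stale refs or a cached workspace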
In my local setup when I run the following command I do see the latest commit id being picked up: atin at dhcp35-96:~/codebase/upstream/glusterfs_master/glusterfs$ git rev-parse origin/master^{commit} # timeout=10 7926992e65d0a07fdc784a6e45740306d9b4a9f2 atin at dhcp35-96:~/codebase/upstream/glusterfs_master/glusterfs$ git show 7926992e65d0a07fdc784a6e45740306d9b4a9f2 commit 7926992e65d0a07fdc784a6e45740306d9b4a9f2 (origin/master, origin/HEAD, master) Author: Sanju Rakonde Date: Mon Aug 26 12:38:40 2019 +0530 glusterd: Unused value coverity fix CID: 1288765 updates: bz#789278 Change-Id: Ie6b01f81339769f44d82fd7c32ad0ed1a697c69c Signed-off-by: Sanju Rakonde ---------- Forwarded message --------- From: Date: Mon, Aug 26, 2019 at 11:32 PM Subject: [Gluster-Maintainers] Build failed in Jenkins: regression-test-burn-in #4710 To: See < https://build.gluster.org/job/regression-test-burn-in/4710/display/redirect> ------------------------------------------ [...truncated 4.18 MB...] ./tests/features/lock-migration/lkmigration-set-option.t - 7 second ./tests/bugs/upcall/bug-1458127.t - 7 second ./tests/bugs/transport/bug-873367.t - 7 second ./tests/bugs/snapshot/bug-1260848.t - 7 second ./tests/bugs/shard/shard-inode-refcount-test.t - 7 second ./tests/bugs/replicate/bug-986905.t - 7 second ./tests/bugs/replicate/bug-921231.t - 7 second ./tests/bugs/replicate/bug-1132102.t - 7 second ./tests/bugs/replicate/bug-1037501.t - 7 second ./tests/bugs/posix/bug-1175711.t - 7 second ./tests/bugs/posix/bug-1122028.t - 7 second ./tests/bugs/glusterfs/bug-861015-log.t - 7 second ./tests/bugs/fuse/bug-983477.t - 7 second ./tests/bugs/ec/bug-1227869.t - 7 second ./tests/bugs/distribute/bug-1086228.t - 7 second ./tests/bugs/cli/bug-1087487.t - 7 second ./tests/bitrot/br-stub.t - 7 second ./tests/basic/ctime/ctime-noatime.t - 7 second ./tests/basic/afr/ta-write-on-bad-brick.t - 7 second ./tests/basic/afr/ta.t - 7 second ./tests/basic/afr/ta-shd.t - 7 second ./tests/basic/afr/root-squash-self-heal.t - 7 second ./tests/basic/afr/granular-esh/add-brick.t - 7 second ./tests/bugs/upcall/bug-1369430.t - 6 second ./tests/bugs/snapshot/bug-1064768.t - 6 second ./tests/bugs/shard/bug-1258334.t - 6 second ./tests/bugs/replicate/bug-1250170-fsync.t - 6 second ./tests/bugs/quota/bug-1250582-volume-reset-should-not-remove-quota-quota-deem-statfs.t - 6 second ./tests/bugs/quota/bug-1243798.t - 6 second ./tests/bugs/quota/bug-1104692.t - 6 second ./tests/bugs/protocol/bug-1321578.t - 6 second ./tests/bugs/nfs/bug-915280.t - 6 second ./tests/bugs/io-cache/bug-858242.t - 6 second ./tests/bugs/glusterfs-server/bug-877992.t - 6 second ./tests/bugs/glusterfs/bug-902610.t - 6 second ./tests/bugs/distribute/bug-884597.t - 6 second ./tests/bugs/core/bug-1699025-brick-mux-detach-brick-fd-issue.t - 6 second ./tests/bugs/core/bug-1168803-snapd-option-validation-fix.t - 6 second ./tests/bugs/bug-1702299.t - 6 second ./tests/bugs/bug-1371806_2.t - 6 second ./tests/bugs/bug-1258069.t - 6 second ./tests/bugs/bitrot/1209818-vol-info-show-scrub-process-properly.t - 6 second ./tests/bugs/bitrot/1209751-bitrot-scrub-tunable-reset.t - 6 second ./tests/basic/glusterd/thin-arbiter-volume-probe.t - 6 second ./tests/basic/gfapi/libgfapi-fini-hang.t - 6 second ./tests/basic/fencing/fencing-crash-conistency.t - 6 second ./tests/basic/ec/statedump.t - 6 second ./tests/basic/distribute/file-create.t - 6 second ./tests/basic/afr/tarissue.t - 6 second ./tests/basic/afr/gfid-heal.t - 6 second ./tests/basic/afr/afr-read-hash-mode.t - 6 second 
./tests/basic/afr/add-brick-self-heal.t - 6 second ./tests/gfid2path/gfid2path_fuse.t - 5 second ./tests/bugs/shard/bug-1259651.t - 5 second ./tests/bugs/replicate/bug-767585-gfid.t - 5 second ./tests/bugs/replicate/bug-1686568-send-truncate-on-arbiter-from-shd.t - 5 second ./tests/bugs/replicate/bug-1626994-info-split-brain.t - 5 second ./tests/bugs/replicate/bug-1365455.t - 5 second ./tests/bugs/replicate/bug-1101647.t - 5 second ./tests/bugs/nfs/bug-877885.t - 5 second ./tests/bugs/nfs/bug-847622.t - 5 second ./tests/bugs/nfs/bug-1116503.t - 5 second ./tests/bugs/md-cache/setxattr-prepoststat.t - 5 second ./tests/bugs/md-cache/bug-1211863_unlink.t - 5 second ./tests/bugs/md-cache/afr-stale-read.t - 5 second ./tests/bugs/io-stats/bug-1598548.t - 5 second ./tests/bugs/glusterfs/bug-895235.t - 5 second ./tests/bugs/glusterfs/bug-856455.t - 5 second ./tests/bugs/glusterfs/bug-848251.t - 5 second ./tests/bugs/glusterd/quorum-value-check.t - 5 second ./tests/bugs/glusterd/bug-1242875-do-not-pass-volinfo-quota.t - 5 second ./tests/bugs/ec/bug-1179050.t - 5 second ./tests/bugs/distribute/bug-912564.t - 5 second ./tests/bugs/distribute/bug-1368012.t - 5 second ./tests/bugs/core/bug-986429.t - 5 second ./tests/bugs/core/bug-908146.t - 5 second ./tests/bugs/bug-1371806_1.t - 5 second ./tests/bugs/bitrot/bug-1229134-bitd-not-support-vol-set.t - 5 second ./tests/bugs/bitrot/1207029-bitrot-daemon-should-start-on-valid-node.t - 5 second ./tests/basic/playground/template-xlator-sanity.t - 5 second ./tests/basic/hardlink-limit.t - 5 second ./tests/basic/glusterd/arbiter-volume-probe.t - 5 second ./tests/basic/ec/nfs.t - 5 second ./tests/basic/ec/ec-read-policy.t - 5 second ./tests/basic/ec/ec-anonymous-fd.t - 5 second ./tests/basic/afr/arbiter-remove-brick.t - 5 second ./tests/gfid2path/gfid2path_nfs.t - 4 second ./tests/gfid2path/get-gfid-to-path.t - 4 second ./tests/gfid2path/block-mount-access.t - 4 second ./tests/bugs/upcall/bug-upcall-stat.t - 4 second ./tests/bugs/trace/bug-797171.t - 4 second ./tests/bugs/snapshot/bug-1178079.t - 4 second ./tests/bugs/shard/bug-1342298.t - 4 second ./tests/bugs/shard/bug-1272986.t - 4 second ./tests/bugs/rpc/bug-954057.t - 4 second ./tests/bugs/replicate/bug-886998.t - 4 second ./tests/bugs/replicate/bug-1480525.t - 4 second ./tests/bugs/replicate/bug-1448804-check-quorum-type-values.t - 4 second ./tests/bugs/replicate/bug-1325792.t - 4 second ./tests/bugs/readdir-ahead/bug-1670253-consistent-metadata.t - 4 second ./tests/bugs/posix/bug-gfid-path.t - 4 second ./tests/bugs/posix/bug-765380.t - 4 second ./tests/bugs/posix/bug-1619720.t - 4 second ./tests/bugs/nfs/zero-atime.t - 4 second ./tests/bugs/nfs/subdir-trailing-slash.t - 4 second ./tests/bugs/nfs/socket-as-fifo.t - 4 second ./tests/bugs/nfs/showmount-many-clients.t - 4 second ./tests/bugs/nfs/bug-1161092-nfs-acls.t - 4 second ./tests/bugs/nfs/bug-1143880-fix-gNFSd-auth-crash.t - 4 second ./tests/bugs/glusterfs-server/bug-873549.t - 4 second ./tests/bugs/glusterfs-server/bug-864222.t - 4 second ./tests/bugs/glusterfs/bug-893378.t - 4 second ./tests/bugs/glusterd/bug-948729/bug-948729-force.t - 4 second ./tests/bugs/glusterd/bug-1482906-peer-file-blank-line.t - 4 second ./tests/bugs/glusterd/bug-1091935-brick-order-check-from-cli-to-glusterd.t - 4 second ./tests/bugs/geo-replication/bug-1296496.t - 4 second ./tests/bugs/ec/bug-1161621.t - 4 second ./tests/bugs/distribute/bug-1088231.t - 4 second ./tests/bugs/cli/bug-977246.t - 4 second ./tests/bugs/cli/bug-1004218.t - 4 second ./tests/bugs/bug-1138841.t - 4 
second ./tests/bugs/bitrot/bug-1210684-scrub-pause-resume-error-handling.t - 4 second ./tests/bugs/access-control/bug-1051896.t - 4 second ./tests/bitrot/bug-1221914.t - 4 second ./tests/basic/ec/ec-internal-xattrs.t - 4 second ./tests/basic/distribute/non-root-unlink-stale-linkto.t - 4 second ./tests/basic/distribute/bug-1265677-use-readdirp.t - 4 second ./tests/basic/changelog/changelog-rename.t - 4 second ./tests/basic/afr/ta-check-locks.t - 4 second ./tests/basic/afr/heal-info.t - 4 second ./tests/performance/quick-read.t - 3 second ./tests/line-coverage/meta-max-coverage.t - 3 second ./tests/bugs/upcall/bug-1422776.t - 3 second ./tests/bugs/upcall/bug-1394131.t - 3 second ./tests/bugs/unclassified/bug-1034085.t - 3 second ./tests/bugs/snapshot/bug-1111041.t - 3 second ./tests/bugs/shard/bug-1256580.t - 3 second ./tests/bugs/shard/bug-1250855.t - 3 second ./tests/bugs/replicate/bug-976800.t - 3 second ./tests/bugs/replicate/bug-880898.t - 3 second ./tests/bugs/read-only/bug-1134822-read-only-default-in-graph.t - 3 second ./tests/bugs/readdir-ahead/bug-1446516.t - 3 second ./tests/bugs/readdir-ahead/bug-1439640.t - 3 second ./tests/bugs/readdir-ahead/bug-1390050.t - 3 second ./tests/bugs/quota/bug-1287996.t - 3 second ./tests/bugs/quick-read/bug-846240.t - 3 second ./tests/bugs/nl-cache/bug-1451588.t - 3 second ./tests/bugs/nfs/bug-1210338.t - 3 second ./tests/bugs/nfs/bug-1166862.t - 3 second ./tests/bugs/md-cache/bug-1632503.t - 3 second ./tests/bugs/md-cache/bug-1476324.t - 3 second ./tests/bugs/glusterfs-server/bug-861542.t - 3 second ./tests/bugs/glusterfs/bug-869724.t - 3 second ./tests/bugs/glusterfs/bug-844688.t - 3 second ./tests/bugs/glusterfs/bug-1482528.t - 3 second ./tests/bugs/glusterd/bug-948729/bug-948729.t - 3 second ./tests/bugs/glusterd/bug-948729/bug-948729-mode-script.t - 3 second ./tests/bugs/fuse/bug-1336818.t - 3 second ./tests/bugs/fuse/bug-1126048.t - 3 second ./tests/bugs/distribute/bug-907072.t - 3 second ./tests/bugs/core/log-bug-1362520.t - 3 second ./tests/bugs/core/io-stats-1322825.t - 3 second ./tests/bugs/core/bug-913544.t - 3 second ./tests/bugs/core/bug-845213.t - 3 second ./tests/bugs/core/bug-834465.t - 3 second ./tests/bugs/core/bug-1421721-mpx-toggle.t - 3 second ./tests/bugs/core/bug-1135514-allow-setxattr-with-null-value.t - 3 second ./tests/bugs/core/bug-1117951.t - 3 second ./tests/bugs/core/949327.t - 3 second ./tests/bugs/cli/bug-983317-volume-get.t - 3 second ./tests/bugs/cli/bug-961307.t - 3 second ./tests/bugs/access-control/bug-1387241.t - 3 second ./tests/bitrot/bug-internal-xattrs-check-1243391.t - 3 second ./tests/basic/quota-rename.t - 3 second ./tests/basic/glusterd/check-cloudsync-ancestry.t - 3 second ./tests/basic/fops-sanity.t - 3 second ./tests/basic/fencing/test-fence-option.t - 3 second ./tests/basic/ec/ec-fallocate.t - 3 second ./tests/basic/ec/dht-rename.t - 3 second ./tests/basic/distribute/lookup.t - 3 second ./tests/basic/distribute/debug-xattrs.t - 3 second ./tests/line-coverage/some-features-in-libglusterfs.t - 2 second ./tests/bugs/unclassified/bug-991622.t - 2 second ./tests/bugs/shard/bug-1245547.t - 2 second ./tests/bugs/replicate/bug-884328.t - 2 second ./tests/bugs/readdir-ahead/bug-1512437.t - 2 second ./tests/bugs/posix/disallow-gfid-volumeid-removexattr.t - 2 second ./tests/bugs/nfs/bug-970070.t - 2 second ./tests/bugs/nfs/bug-1302948.t - 2 second ./tests/bugs/logging/bug-823081.t - 2 second ./tests/bugs/glusterfs-server/bug-889996.t - 2 second ./tests/bugs/glusterfs/bug-860297.t - 2 second 
./tests/bugs/glusterfs/bug-811493.t - 2 second ./tests/bugs/glusterd/bug-1085330-and-bug-916549.t - 2 second ./tests/bugs/fuse/bug-1283103.t - 2 second ./tests/bugs/distribute/bug-924265.t - 2 second ./tests/bugs/distribute/bug-1204140.t - 2 second ./tests/bugs/core/bug-924075.t - 2 second ./tests/bugs/core/bug-903336.t - 2 second ./tests/bugs/core/bug-1119582.t - 2 second ./tests/bugs/core/bug-1111557.t - 2 second ./tests/bugs/cli/bug-969193.t - 2 second ./tests/bugs/cli/bug-949298.t - 2 second ./tests/bugs/cli/bug-1378842-volume-get-all.t - 2 second ./tests/basic/md-cache/bug-1418249.t - 2 second ./tests/basic/afr/arbiter-cli.t - 2 second ./tests/line-coverage/volfile-with-all-graph-syntax.t - 1 second ./tests/bugs/shard/bug-1261773.t - 1 second ./tests/bugs/replicate/ta-inode-refresh-read.t - 1 second ./tests/bugs/glusterfs/bug-892730.t - 1 second ./tests/bugs/glusterfs/bug-853690.t - 1 second ./tests/bugs/cli/bug-921215.t - 1 second ./tests/bugs/cli/bug-867252.t - 1 second ./tests/bugs/cli/bug-764638.t - 1 second ./tests/bugs/cli/bug-1047378.t - 1 second ./tests/basic/posixonly.t - 1 second ./tests/basic/peer-parsing.t - 1 second ./tests/basic/netgroup_parsing.t - 1 second ./tests/basic/gfapi/sink.t - 1 second ./tests/basic/exports_parsing.t - 1 second ./tests/basic/glusterfsd-args.t - 0 second 4 test(s) failed ./tests/bugs/core/multiplex-limit-issue-151.t ./tests/bugs/glusterd/add-brick-and-validate-replicated-volume-options.t ./tests/bugs/glusterd/brick-mux-validation.t ./tests/bugs/glusterd/bug-1595320.t 0 test(s) generated core 10 test(s) needed retry ./tests/bugs/core/bug-1119582.t ./tests/bugs/core/multiplex-limit-issue-151.t ./tests/bugs/glusterd/add-brick-and-validate-replicated-volume-options.t ./tests/bugs/glusterd/brick-mux-validation.t ./tests/bugs/glusterd/bug-1595320.t ./tests/bugs/glusterd/bug-1696046.t ./tests/bugs/glusterd/optimized-basic-testcases.t ./tests/bugs/replicate/bug-1134691-afr-lookup-metadata-heal.t ./tests/bugs/replicate/bug-976800.t ./tests/bugs/snapshot/bug-1111041.t Result is 1 tar: Removing leading `/' from member names kernel.core_pattern = /%e-%p.core Build step 'Execute shell' marked build as failure _______________________________________________ maintainers mailing list maintainers at gluster.org https://lists.gluster.org/mailman/listinfo/maintainers -------------- next part -------------- An HTML attachment was scrubbed... URL: From sacharya at redhat.com Tue Aug 27 04:45:35 2019 From: sacharya at redhat.com (sacharya at redhat.com) Date: Tue, 27 Aug 2019 04:45:35 +0000 Subject: [Gluster-devel] Invitation: Gluster Community Meeting @ Tue Aug 27, 2019 11:30am - 12:30pm (IST) (gluster-devel@gluster.org) Message-ID: <00000000000076f7d6059111f14d@google.com> You have been invited to the following event. Title: Gluster Community Meeting Bridge: https://bluejeans.com/836554017 Minutes meeting: https://hackmd.io/IGIuc8GxRv6JAUpRjWZ5sw Previous Meeting notes: https://github.com/gluster/community/meetings Flash talk: No Flash Talk When: Tue Aug 27, 2019 11:30am ? 
12:30pm India Standard Time - Kolkata Where: https://bluejeans.com/836554017 Calendar: gluster-devel at gluster.org Who: * sacharya at redhat.com - organizer * gluster-users at gluster.org * gluster-devel at gluster.org Event details: https://www.google.com/calendar/event?action=VIEW&eid=MDVvdTVtb3Q3MDkzOXV0aHNiYmN2NTF1cWsgZ2x1c3Rlci1kZXZlbEBnbHVzdGVyLm9yZw&tok=MTkjc2FjaGFyeWFAcmVkaGF0LmNvbWI1NGNlOThmYjVhYTBlYWNkN2UyZGQ4MjU5NWUzZGU3ZjEyOTI2Yzc&ctz=Asia%2FKolkata&hl=en&es=0 Invitation from Google Calendar: https://www.google.com/calendar/ You are receiving this courtesy email at the account gluster-devel at gluster.org because you are an attendee of this event. To stop receiving future updates for this event, decline this event. Alternatively you can sign up for a Google account at https://www.google.com/calendar/ and control your notification settings for your entire calendar. Forwarding this invitation could allow any recipient to send a response to the organizer and be added to the guest list, or invite others regardless of their own invitation status, or to modify your RSVP. Learn more at https://support.google.com/calendar/answer/37135#forwarding -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: text/calendar Size: 1750 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: invite.ics Type: application/ics Size: 1788 bytes Desc: not available URL: From sacharya at redhat.com Tue Aug 27 09:01:08 2019 From: sacharya at redhat.com (Shwetha Acharya) Date: Tue, 27 Aug 2019 14:31:08 +0530 Subject: [Gluster-devel] [Gluster-users] Minutes of Gluster Community Meeting (APAC) 27th August 2019 Message-ID: Hi, The minutes of the meeting are as follows: # Gluster Community Meeting - 27th Aug 2019 ### Previous Meeting minutes: - http://github.com/gluster/community - Recording of this meeting- - https://bluejeans.com/s/s1Zma ### Date/Time: Check the [community calendar]( https://calendar.google.com/calendar/b/1?cid=dmViajVibDBrbnNiOWQwY205ZWg5cGJsaTRAZ3JvdXAuY2FsZW5kYXIuZ29vZ2xlLmNvbQ ) ### Bridge * APAC friendly hours - Tuesday 27th August 2019, 11:30AM IST - Bridge: https://bluejeans.com/836554017 * NA/EMEA - Every 1st and 3rd Tuesday at 01:00 PM EDT - Bridge: https://bluejeans.com/486278655 ------- ### Attendance Name (#gluster-dev alias) - company * Hari Gowtham (hgowtham) - Red Hat * Ravishankar (itisravi) Red Hat * Sheetal Pamecha (spamecha) - Red Hat * David Spisla - Gluster User * Sunny Kumar (sunny) - Red hat * Rinku Kothiya (rkothiya) - Red Hat * Ashish Pandey (_apandey) Red Hat * Sunil Kumar Acharya - Red Hat * Arjun Sharma - Red Hat * Sanju Rakonde(srakonde) - Red Hat * Kotresh (kotreshhr) - Redhat * Karthik Subrahmanya (ksubrahm) - Red Hat ### User stories * Ravi - timeout of self heal crawl to be automatically updated. (Bug ID: 1743988) * Hari - user asked for project: user quota. Had to reply that we are running out of bandwidth. contribution will be helpful here. ### Community * Project metrics: Metrics | Value | |[Coverity] | 65 | |[Clang Scan] | 59 | |[Test coverage] | 70.8% | |[New Bugs in last 14 days] | 6 | [[7.x] | 2 | [[6.x] | 10 | [[ 5.x] | 1 | |[Gluster User Queries in last 14 days] | 232 | |[Total Bugs] | 345 | |[Total Github issues](https://github.com/gluster/glusterfs/issues) | 393 | * Any release updates? 
Rinku - We have released Release-7 rc0 on 26-August-2019, request users to report any problems seen. * Blocker issues across the project? Atin - infra issue related to a bunch of tests are failing. A fix to do retry was done. Nightly is running an old code base. * Notable thread from mailing list Amar - Moving to Github from gerrit. Atin - It will be good to move to github completely. We can discuss it with large audiance. ### Conferences / Meetups * [Developers' Conference - {Date}]({Link}) - No conferences to talk about this week. Important dates: CFP Closed Schedule Announcement: Event Open for Registration : Last Date of Registration: Event dates: Venue: Talks related to gluster: ### GlusterFS - v7.0 and beyond * Proposal - https://docs.google.com/presentation/d/1rtn38S4YBe77KK5IjczWmoAR-ZSO-i3tNHg9pAH8Wt8/edit?usp=sharing * Proposed Plan: - GlusterFS-7.0 (July 1st) - Stability, Automation - Only - GlusterFS-8.0 (Nov 1st) - - Plan for Fedora 31/RHEL8.2 - GlusterFS-9.0 (March 1st, 2020) Reflink, io_uring, and similar improvements. ### Developer focus * Any design specs to discuss? nil ### Component status * Arbiter - no new updates. * AFR - nil * DHT - nil * EC - new data corruption corner cases. Pranith has found the code path. * FUSE - nil * POSIX - nil * DOC - changes related to afr was posted by Ravi and Karthik * Geo Replication - nil * libglusterfs - nil * Glusterd - Glusto automation run is blocked because of https://bugzilla.redhat.com/show_bug.cgi?id=1744420 , team is working on it. * Snapshot - nil * NFS - nil * thin-arbiter - nil ### Flash Talk Gluster * Typical 5 min talk about Gluster with up to 5 more minutes for questions ### Recent Blog posts / Document updates * https://shwetha174.blogspot.com/search/label/Gluster * https://medium.com/@ntkumar/running-alluxio-on-hashicorp-nomad-ef78130727ef * https://medium.com/@tumballi/kadalu-ocean-of-potential-in-k8s-storage-a07be1b8b961 ### Gluster Friday Five * Every friday we release this, which basically covers highlight of week in gluster.Also you can find more videos in youtube link. https://www.youtube.com/channel/UCfilWh0JA5NfCjbqq1vsBVA ### Host Sheetal will host next meeting. * Who will host next meeting? - Host will need to send out the agenda 24hr - 12hrs in advance to mailing list, and also make sure to send the meeting minutes. - Host will need to reach out to one user at least who can talk about their usecase, their experience, and their needs. - Host needs to send meeting minutes as PR to http://github.com/gluster/community - Host has to send the meeting minutes as a mail to the gluster-devel and gluster-users list. ### Notetaker * Who will take notes from the next meeting? ### RoundTable * ### Action Items on host * Check-in Minutes of meeting for this meeting -------------- next part -------------- An HTML attachment was scrubbed... URL: From amukherj at redhat.com Wed Aug 28 02:57:55 2019 From: amukherj at redhat.com (Atin Mukherjee) Date: Wed, 28 Aug 2019 08:27:55 +0530 Subject: [Gluster-devel] Upstream nightly build on Centos is failing with glusterd crash In-Reply-To: References: Message-ID: This issue is fixed now. Thanks to Nithya for root causing and fixing it. On Fri, Aug 23, 2019 at 11:19 AM Bala Konda Reddy Mekala wrote: > Hi, > On fresh installation with the nightly build[1], "systemctl glusterd > start" is crashing with a glusterd crash (coredump). Bug was filed[2] and > centos-ci for glusto-tests is currently blocked because of the bug. Please > look into it. 
>
> Thanks,
> Bala
>
> [1] http://artifacts.ci.centos.org/gluster/nightly/master/7/x86_64/
> [2] https://bugzilla.redhat.com/show_bug.cgi?id=1744420
> _______________________________________________
>
> Community Meeting Calendar:
>
> APAC Schedule -
> Every 2nd and 4th Tuesday at 11:30 AM IST
> Bridge: https://bluejeans.com/836554017
>
> NA/EMEA Schedule -
> Every 1st and 3rd Tuesday at 01:00 PM EDT
> Bridge: https://bluejeans.com/486278655
>
> Gluster-devel mailing list
> Gluster-devel at gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ndevos at redhat.com Wed Aug 28 07:08:09 2019
From: ndevos at redhat.com (Niels de Vos)
Date: Wed, 28 Aug 2019 09:08:09 +0200
Subject: [Gluster-devel] [Gluster-Maintainers] Proposal: move glusterfs development to github workflow, completely
In-Reply-To:
References: <20190826085626.GA28580@ndevos-x270> <1C61604E-1B6E-41D8-887C-4A5A995241E1@julianfamily.org> <20190826184023.GE28580@ndevos-x270>
Message-ID: <20190828070809.GB27346@ndevos-x270>

On Tue, Aug 27, 2019 at 06:57:14AM +0530, Amar Tumballi Suryanarayan wrote:
> On Tue, Aug 27, 2019 at 12:10 AM Niels de Vos wrote:
> > On Mon, Aug 26, 2019 at 08:36:30PM +0530, Aravinda Vishwanathapura Krishna
> > Murthy wrote:
> > > On Mon, Aug 26, 2019 at 7:49 PM Joe Julian wrote:
> > > > > Comparing the changes between revisions is something
> > > > > that GitHub does not support...
> > > >
> > > > It does support that, actually.
> > > > _______________________________________________
> > >
> > > Yes, it does support. We need to use Squash merge after all review is
> > > done.
> >
> > Squash merge would also combine multiple commits that are intended to
> > stay separate. This is really bad :-(
>
> We should treat 1 patch in gerrit as 1 PR in github, then squash merge
> works same as how reviews in gerrit are done. Or we can come up with
> label, upon which we can actually do 'rebase and merge' option, which can
> preserve the commits as is.

Something like that would be good. For many things, including commit message updates, squashing patches just loses details. We don't do that with Gerrit now, and we should not do that when using GitHub PRs.

Properly documenting changes is still very important to me; the details of patches should be explained in commit messages. This only works well when developers 'force push' to the branch holding the PR.

Niels

From sunkumar at redhat.com Wed Aug 28 12:51:27 2019
From: sunkumar at redhat.com (Sunny Kumar)
Date: Wed, 28 Aug 2019 18:21:27 +0530
Subject: [Gluster-devel] Gluster meetup: India
Message-ID:

Hello folks,

We are hosting a Gluster meetup at our office (Redhat-BLR-IN) on 25th September 2019. Please find the agenda and location details here [1] and plan accordingly.

The highlight of this event will be Gluster-X; we will keep updating the agenda with topics, so keep an eye on it.

Note:
* RSVP as YES if attending; this will help us organize the facilities better.

If you have any questions, please reach out to me or comment on the event page [1]. Feel free to share this meetup via other channels.

[1]. https://www.meetup.com/glusterfs-India/events/264366771/

/sunny
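A practical footnote to the GitHub-workflow discussion earlier in this digest: the "force push to the branch holding the PR" model Niels describes keeps each commit of a series intact while still letting GitHub show the diff between one push and the next. A minimal sketch of the contributor side, with assumed remote and branch names (an illustration only, not project policy):

    # rework the series locally after review comments
    git rebase -i origin/master        # or: git commit --amend for a single patch

    # update the same pull request in place; --force-with-lease refuses to
    # overwrite the branch if someone else pushed to it in the meantime
    git push --force-with-lease origin my-topic-branch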