From jenkins at build.gluster.org Mon Sep 2 01:45:02 2019 From: jenkins at build.gluster.org (jenkins at build.gluster.org) Date: Mon, 2 Sep 2019 01:45:02 +0000 (UTC) Subject: [Gluster-devel] Weekly Untriaged Bugs Message-ID: <1587462271.4.1567388703372.JavaMail.jenkins@jenkins-el7.rht.gluster.org> [...truncated 7 lines...] https://bugzilla.redhat.com/1743195 / core: can't start gluster after upgrade from 5 to 6 https://bugzilla.redhat.com/1738878 / core: FUSE client's memory leak https://bugzilla.redhat.com/1744883 / core: GlusterFS problem dataloss https://bugzilla.redhat.com/1746810 / doc: markdown files containing 404 links https://bugzilla.redhat.com/1745026 / fuse: endless heal gluster volume; incrementing number of files to heal when all peers in volume are up https://bugzilla.redhat.com/1746140 / geo-replication: geo-rep: Changelog archive file format is incorrect https://bugzilla.redhat.com/1743215 / glusterd: glusterd-utils: 0-management: xfs_info exited with non-zero exit status [Permission denied] https://bugzilla.redhat.com/1746615 / glusterd: SSL Volumes Fail Intermittently in 6.5 https://bugzilla.redhat.com/1739320 / glusterd: The result (hostname) of getnameinfo for all bricks (ipv6 addresses) are the same, while they are not. https://bugzilla.redhat.com/1747414 / libglusterfsclient: EIO error on check_and_dump_fuse_W call https://bugzilla.redhat.com/1741402 / posix-acl: READDIRP incorrectly updates posix-acl inode ctx https://bugzilla.redhat.com/1744671 / project-infrastructure: Smoke is failing for the changeset https://bugzilla.redhat.com/1738778 / project-infrastructure: Unable to setup softserve VM https://bugzilla.redhat.com/1741899 / replicate: the volume of occupied space in the bricks of gluster volume (3 nodes replica) differs on nodes and the healing does not fix it https://bugzilla.redhat.com/1745916 / rpc: glusterfs client process memory leak after enable tls on community version 6.5 https://bugzilla.redhat.com/1740413 / rpc: Gluster volume bricks crashes when running a security scan on glusterfs ports https://bugzilla.redhat.com/1739884 / transport: glusterfsd process crashes with SIGSEGV [...truncated 2 lines...] -------------- next part -------------- A non-text attachment was scrubbed... Name: build.log Type: application/octet-stream Size: 2227 bytes Desc: not available URL: From sacharya at redhat.com Tue Sep 3 10:15:22 2019 From: sacharya at redhat.com (Shwetha Acharya) Date: Tue, 3 Sep 2019 15:45:22 +0530 Subject: [Gluster-devel] [Gluster-users] geo-replication won't start In-Reply-To: <143d8bdc905173dd3743f45e67ebf8ee@li.nux.ro> References: <143d8bdc905173dd3743f45e67ebf8ee@li.nux.ro> Message-ID: Hi Lucian, Slave must be a gluster volume. Data from master volume gets replicated into the slave volume after creation of the geo-rep session. You can try creating the session again using the steps mentioned in this link https://docs.gluster.org/en/latest/Administrator%20Guide/Geo %20Replication/#creating-the-session. Regards, Shwetha On Thu, Aug 22, 2019 at 9:51 PM Nux! wrote: > Hi, > > I'm trying for the first time ever the geo-replication feature and I am > not having much success (CentOS7, gluster 6.5). > First of all, from the docs I get the impression that I can > geo-replicate over ssh to a simple dir, but it doesn't seem to be the > case, the "slave" must be a gluster volume, doesn't it? > > Second, the slave host is not in the subnet with the other gluster > peers, but I reckon this would be the usual case and not a problem. 
> > I've stopped the firewall on all peers and slave host to rule it out, > but I can't get the georep started. > > Creation is successfull, however STATUS won't change from Created. > I'm looking through all the logs and I can't see anything meaningful. > > What steps could I take to debug this further? > > Cheers, > Lucian > > > -- > Sent from the Delta quadrant using Borg technology! > _______________________________________________ > Gluster-users mailing list > Gluster-users at gluster.org > https://lists.gluster.org/mailman/listinfo/gluster-users > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsasonro at redhat.com Wed Sep 4 09:00:43 2019 From: bsasonro at redhat.com (Barak Sason Rofman) Date: Wed, 4 Sep 2019 12:00:43 +0300 Subject: [Gluster-devel] How does Gluster works - Locating files after change sin cluster Message-ID: Hello everyone, I'm about to post several threads with question regarding how Gluster handles different scenarios. I'm looking for answers on architecture/design/"the is the idea" level, and not specifically implementation (however, it would be nice to know where the relevant code is). In this thread I want to focus on the "adding servers/bricks" scenario. >From what I know at this point, every file that's created is given a 32-bit value based on it's name, and this hashing function is fixed and independent of any factors. Next, there is a function (a routing method), located on the client side, that *is* dependent on outside factors, such as numbers of servers (or bricks) in the system which determines on which server a particular file is located. Let's examine the following case: Assume (for simplicity's sake) that the hashing function assign values to file in 1-100 range (instead of 32-bit) and currently there are 4 servers in the cluster. In this case, files 1-25 would be located on server 1, 26-50 on server 2 and so on. Now, if a 5th server is added to the cluster, then the ranges will change: files 1-20 will be located on server 1, 21-40 on server 2 and so on. The questions regarding this scenarios are as follows: 1 - Does the servers update the clients that an additional server (or brick) has been added to the cluster? If not, how does this happen? 2 - Does the server also know which files *should* be located on them? if so, does the servers create a link file (which specifies the "real" location of the file) for the files that are supposed to be moved (e.g. files 21-25) or actually move the data right away? Maybe this works in a completely different manner? I have additional questions regarding this, but they are dependent om the answers to these question. Thank you all for your help. -- *Barak Sason Rofman* Gluster Storage Development Red Hat Israel 34 Jerusalem rd. Ra'anana, 43501 bsasonro at redhat.com T: *+972-9-7692304* M: *+972-52-4326355* -------------- next part -------------- An HTML attachment was scrubbed... 
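To make the toy example above concrete, here is a minimal Python sketch of the idea being described: a fixed, name-only hash plus a layout that splits the hash space into contiguous per-brick ranges. This is not Gluster's real implementation (DHT uses a 32-bit Davies-Meyer hash and stores per-directory ranges in the trusted.glusterfs.dht xattr); the 0-99 hash space, the file names and all helper names below are made up purely for illustration.

import hashlib

HASH_SPACE = 100  # toy stand-in for DHT's 32-bit hash space

def toy_hash(name):
    # fixed, name-only hash, independent of how many bricks exist
    return int(hashlib.md5(name.encode()).hexdigest(), 16) % HASH_SPACE

def layout(num_bricks):
    # contiguous, equal-sized ranges: brick i owns [start, end)
    size = HASH_SPACE // num_bricks
    return [(i * size, HASH_SPACE if i == num_bricks - 1 else (i + 1) * size)
            for i in range(num_bricks)]

def hashed_brick(name, num_bricks):
    h = toy_hash(name)
    for brick, (start, end) in enumerate(layout(num_bricks)):
        if start <= h < end:
            return brick

for fname in ("report.txt", "photo.jpg", "notes.md"):
    before, after = hashed_brick(fname, 4), hashed_brick(fname, 5)
    moved = "" if before == after else "  <- expected location changes"
    print(f"{fname}: brick {before} (4 bricks) -> brick {after} (5 bricks){moved}")

Running it shows that some, but not all, names map to a different brick once a fifth brick is added — which is exactly the situation the replies later in this thread address with per-directory layout updates, link files and rebalance/fix-layout.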
URL: From amarts at gmail.com Thu Sep 5 11:43:22 2019 From: amarts at gmail.com (Amar Tumballi) Date: Thu, 5 Sep 2019 17:13:22 +0530 Subject: [Gluster-devel] [Gluster-Maintainers] Proposal: move glusterfs development to github workflow, completely In-Reply-To: <20190828070809.GB27346@ndevos-x270> References: <20190826085626.GA28580@ndevos-x270> <1C61604E-1B6E-41D8-887C-4A5A995241E1@julianfamily.org> <20190826184023.GE28580@ndevos-x270> <20190828070809.GB27346@ndevos-x270> Message-ID: Going through the thread, I see in general positive responses for the same, with few points on review system, and not loosing information when merging the patches. While we are working on that, we need to see and understand how our CI/CD looks like with github migration. We surely need suggestion and volunteers here to get this going. Regards, Amar On Wed, Aug 28, 2019 at 12:38 PM Niels de Vos wrote: > On Tue, Aug 27, 2019 at 06:57:14AM +0530, Amar Tumballi Suryanarayan wrote: > > On Tue, Aug 27, 2019 at 12:10 AM Niels de Vos wrote: > > > > > On Mon, Aug 26, 2019 at 08:36:30PM +0530, Aravinda Vishwanathapura > Krishna > > > Murthy wrote: > > > > On Mon, Aug 26, 2019 at 7:49 PM Joe Julian > wrote: > > > > > > > > > > Comparing the changes between revisions is something > > > > > that GitHub does not support... > > > > > > > > > > It does support that, > > > > > actually._______________________________________________ > > > > > > > > > > > > > Yes, it does support. We need to use Squash merge after all review is > > > done. > > > > > > Squash merge would also combine multiple commits that are intended to > > > stay separate. This is really bad :-( > > > > > > > > We should treat 1 patch in gerrit as 1 PR in github, then squash merge > > works same as how reviews in gerrit are done. Or we can come up with > > label, upon which we can actually do 'rebase and merge' option, which can > > preserve the commits as is. > > Something like that would be good. For many things, including commit > message update squashing patches is just loosing details. We dont do > that with Gerrit now, and we should not do that when using GitHub PRs. > Proper documenting changes is still very important to me, the details of > patches should be explained in commit messages. This only works well > when developers 'force push' to the branch holding the PR. > > Niels > _______________________________________________ > > Community Meeting Calendar: > > APAC Schedule - > Every 2nd and 4th Tuesday at 11:30 AM IST > Bridge: https://bluejeans.com/836554017 > > NA/EMEA Schedule - > Every 1st and 3rd Tuesday at 01:00 PM EDT > Bridge: https://bluejeans.com/486278655 > > Gluster-devel mailing list > Gluster-devel at gluster.org > https://lists.gluster.org/mailman/listinfo/gluster-devel > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rtalur at redhat.com Thu Sep 5 13:02:31 2019 From: rtalur at redhat.com (Raghavendra Talur) Date: Thu, 5 Sep 2019 09:02:31 -0400 Subject: [Gluster-devel] How does Gluster works - Locating files after change sin cluster In-Reply-To: References: Message-ID: On Wed, Sep 4, 2019 at 5:01 AM Barak Sason Rofman wrote: > Hello everyone, > > I'm about to post several threads with question regarding how Gluster > handles different scenarios. > I'm looking for answers on architecture/design/"the is the idea" level, > and not specifically implementation (however, it would be nice to know > where the relevant code is). > > In this thread I want to focus on the "adding servers/bricks" scenario. 
> From what I know at this point, every file that's created is given a > 32-bit value based on it's name, and this hashing function is fixed and > independent of any factors. > Next, there is a function (a routing method), located on the client side, > that *is* dependent on outside factors, such as numbers of servers (or > bricks) in the system which determines on which server a particular file is > located. > > Let's examine the following case: > Assume (for simplicity's sake) that the hashing function assign values to > file in 1-100 range (instead of 32-bit) and currently there are 4 servers > in the cluster. > In this case, files 1-25 would be located on server 1, 26-50 on server 2 > and so on. > Now, if a 5th server is added to the cluster, then the ranges will change: > files 1-20 will be located on server 1, 21-40 on server 2 and so on. > > The questions regarding this scenarios are as follows: > 1 - Does the servers update the clients that an additional server (or > brick) has been added to the cluster? If not, how does this happen? > Yes, addition of a brick happens through a gluster cli command that updates the volume info in glusterd. Glusterd(the one which updated config and other peers) update clients about this change. 2 - Does the server also know which files *should* be located on them? if > so, does the servers create a link file (which specifies the "real" > location of the file) for the files that are supposed to be moved (e.g. > files 21-25) or actually move the data right away? Maybe this works in a > completely different manner? > The addition of a brick has a step for updating the xattrs on the bricks which marks the range for them. The creation of link files happens lazily. Clients look up on all bricks when they don't find the file on the brick where it is supposed to be(called hashed brick), the brick where they find the file is called cached brick and a link file is created. For more information on distribute mechanism refer to https://docs.gluster.org/en/latest/Quick-Start-Guide/Architecture/#dhtdistributed-hash-table-translator For more information on how clients get update from glusterd refer to https://www.youtube.com/watch?v=Gq-yBYq8Gjg > I have additional questions regarding this, but they are dependent om the > answers to these question. > > Thank you all for your help. > -- > *Barak Sason Rofman* > > Gluster Storage Development > > Red Hat Israel > > 34 Jerusalem rd. Ra'anana, 43501 > > bsasonro at redhat.com T: *+972-9-7692304* > M: *+972-52-4326355* > > _______________________________________________ > > Community Meeting Calendar: > > APAC Schedule - > Every 2nd and 4th Tuesday at 11:30 AM IST > Bridge: https://bluejeans.com/836554017 > > NA/EMEA Schedule - > Every 1st and 3rd Tuesday at 01:00 PM EDT > Bridge: https://bluejeans.com/486278655 > > Gluster-devel mailing list > Gluster-devel at gluster.org > https://lists.gluster.org/mailman/listinfo/gluster-devel > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nbalacha at redhat.com Fri Sep 6 03:55:39 2019 From: nbalacha at redhat.com (Nithya Balachandran) Date: Fri, 6 Sep 2019 09:25:39 +0530 Subject: [Gluster-devel] How does Gluster works - Locating files after change sin cluster In-Reply-To: References: Message-ID: On Thu, 5 Sep 2019 at 18:33, Raghavendra Talur wrote: > > > On Wed, Sep 4, 2019 at 5:01 AM Barak Sason Rofman > wrote: > >> Hello everyone, >> >> I'm about to post several threads with question regarding how Gluster >> handles different scenarios. 
>> I'm looking for answers on architecture/design/"the is the idea" level, >> and not specifically implementation (however, it would be nice to know >> where the relevant code is). >> >> In this thread I want to focus on the "adding servers/bricks" scenario. >> From what I know at this point, every file that's created is given a >> 32-bit value based on it's name, and this hashing function is fixed and >> independent of any factors. >> Next, there is a function (a routing method), located on the client side, >> that *is* dependent on outside factors, such as numbers of servers (or >> bricks) in the system which determines on which server a particular file is >> located. >> >> Let's examine the following case: >> Assume (for simplicity's sake) that the hashing function assign values to >> file in 1-100 range (instead of 32-bit) and currently there are 4 servers >> in the cluster. >> In this case, files 1-25 would be located on server 1, 26-50 on server 2 >> and so on. >> Now, if a 5th server is added to the cluster, then the ranges will >> change: files 1-20 will be located on server 1, 21-40 on server 2 and so on. >> >> The questions regarding this scenarios are as follows: >> 1 - Does the servers update the clients that an additional server (or >> brick) has been added to the cluster? If not, how does this happen? >> > > Yes, addition of a brick happens through a gluster cli command that > updates the volume info in glusterd. Glusterd(the one which updated config > and other peers) update clients about this change. > > 2 - Does the server also know which files *should* be located on them? if >> so, does the servers create a link file (which specifies the "real" >> location of the file) for the files that are supposed to be moved (e.g. >> files 21-25) or actually move the data right away? Maybe this works in a >> completely different manner? >> > > The addition of a brick has a step for updating the xattrs on the bricks > which marks the range for them. The creation of link files happens lazily. > Clients look up on all bricks when they don't find the file on the brick > where it is supposed to be(called hashed brick), the brick where they find > the file is called cached brick and a link file is created. > > To add to this, directories which were created before the bricks were added will not include the new bricks in the layout until a rebalance or fix-layout is run. Directories created after the add-brick will include the newly added bricks in the range. > For more information on distribute mechanism refer to > https://docs.gluster.org/en/latest/Quick-Start-Guide/Architecture/#dhtdistributed-hash-table-translator > For more information on how clients get update from glusterd refer to > https://www.youtube.com/watch?v=Gq-yBYq8Gjg > > >> I have additional questions regarding this, but they are dependent om the >> answers to these question. >> >> Thank you all for your help. >> -- >> *Barak Sason Rofman* >> >> Gluster Storage Development >> >> Red Hat Israel >> >> 34 Jerusalem rd. 
Ra'anana, 43501 >> >> bsasonro at redhat.com T: *+972-9-7692304* >> M: *+972-52-4326355* >> >> _______________________________________________ >> >> Community Meeting Calendar: >> >> APAC Schedule - >> Every 2nd and 4th Tuesday at 11:30 AM IST >> Bridge: https://bluejeans.com/836554017 >> >> NA/EMEA Schedule - >> Every 1st and 3rd Tuesday at 01:00 PM EDT >> Bridge: https://bluejeans.com/486278655 >> >> Gluster-devel mailing list >> Gluster-devel at gluster.org >> https://lists.gluster.org/mailman/listinfo/gluster-devel >> >> _______________________________________________ > > Community Meeting Calendar: > > APAC Schedule - > Every 2nd and 4th Tuesday at 11:30 AM IST > Bridge: https://bluejeans.com/836554017 > > NA/EMEA Schedule - > Every 1st and 3rd Tuesday at 01:00 PM EDT > Bridge: https://bluejeans.com/486278655 > > Gluster-devel mailing list > Gluster-devel at gluster.org > https://lists.gluster.org/mailman/listinfo/gluster-devel > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jenkins at build.gluster.org Mon Sep 9 01:45:02 2019 From: jenkins at build.gluster.org (jenkins at build.gluster.org) Date: Mon, 9 Sep 2019 01:45:02 +0000 (UTC) Subject: [Gluster-devel] Weekly Untriaged Bugs Message-ID: <864175082.10.1567993503314.JavaMail.jenkins@jenkins-el7.rht.gluster.org> [...truncated 7 lines...] https://bugzilla.redhat.com/1743195 / core: can't start gluster after upgrade from 5 to 6 https://bugzilla.redhat.com/1744883 / core: GlusterFS problem dataloss https://bugzilla.redhat.com/1749272 / disperse: The version of the file in the disperse volume created with different nodes is incorrect https://bugzilla.redhat.com/1747844 / distribute: Rebalance doesn't work correctly if performance.parallel-readdir on and with some other specific options set https://bugzilla.redhat.com/1746140 / geo-replication: geo-rep: Changelog archive file format is incorrect https://bugzilla.redhat.com/1743215 / glusterd: glusterd-utils: 0-management: xfs_info exited with non-zero exit status [Permission denied] https://bugzilla.redhat.com/1749625 / glusterd: [GlusterFS 6.1] GlusterFS client process crash https://bugzilla.redhat.com/1746615 / glusterd: SSL Volumes Fail Intermittently in 6.5 https://bugzilla.redhat.com/1747414 / libglusterfsclient: EIO error on check_and_dump_fuse_W call https://bugzilla.redhat.com/1741402 / posix-acl: READDIRP incorrectly updates posix-acl inode ctx https://bugzilla.redhat.com/1741899 / replicate: the volume of occupied space in the bricks of gluster volume (3 nodes replica) differs on nodes and the healing does not fix it https://bugzilla.redhat.com/1745916 / rpc: glusterfs client process memory leak after enable tls on community version 6.5 https://bugzilla.redhat.com/1740413 / rpc: Gluster volume bricks crashes when running a security scan on glusterfs ports https://bugzilla.redhat.com/1748205 / selfheal: null gfid entries can not be healed https://bugzilla.redhat.com/1749369 / write-behind: Segmentation fault occurs while truncate file [...truncated 2 lines...] -------------- next part -------------- A non-text attachment was scrubbed... Name: build.log Type: application/octet-stream Size: 2081 bytes Desc: not available URL: From spamecha at redhat.com Tue Sep 10 03:40:01 2019 From: spamecha at redhat.com (spamecha at redhat.com) Date: Tue, 10 Sep 2019 03:40:01 +0000 Subject: [Gluster-devel] Invitation: Invitation: Gluster Community Meeting @ Tue Sep 10, 2019 ... 
@ Tue Sep 10, 2019 11:30am - 12:25pm (IST) (gluster-devel@gluster.org) Message-ID: <000000000000c5a48905922aa878@google.com> You have been invited to the following event. Title: Invitation: Gluster Community Meeting @ Tue Sep 10, 2019 11:30am - 12:25pm Bridge: https://bluejeans.com/836554017 Minutes meeting: https://hackmd.io/_7IjU1CXTimG2p8K3GBUCg Previous Meeting notes: https://github.com/gluster/community Flash talk: No Flash Talk When: Tue Sep 10, 2019 11:30am ? 12:25pm India Standard Time - Kolkata Where: https://bluejeans.com/836554017, bangalore-engg-rashtrakuta-12-p-vc Calendar: gluster-devel at gluster.org Who: * spamecha at redhat.com - organizer * risjain at redhat.com * gluster-users at gluster.org * gluster-devel at gluster.org Event details: https://www.google.com/calendar/event?action=VIEW&eid=N2FhOHB0bjVoZXF0bnQxbWFpMmxva2lqbXYgZ2x1c3Rlci1kZXZlbEBnbHVzdGVyLm9yZw&tok=MTkjc3BhbWVjaGFAcmVkaGF0LmNvbTdmNzZjNjdkMDNjYzFkMTE0OTJhNTQ4ZjVmMjY4NWViZGFkNTFiMmU&ctz=Asia%2FKolkata&hl=en&es=0 Invitation from Google Calendar: https://www.google.com/calendar/ You are receiving this courtesy email at the account gluster-devel at gluster.org because you are an attendee of this event. To stop receiving future updates for this event, decline this event. Alternatively you can sign up for a Google account at https://www.google.com/calendar/ and control your notification settings for your entire calendar. Forwarding this invitation could allow any recipient to send a response to the organizer and be added to the guest list, or invite others regardless of their own invitation status, or to modify your RSVP. Learn more at https://support.google.com/calendar/answer/37135#forwarding -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: text/calendar Size: 2218 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: invite.ics Type: application/ics Size: 2264 bytes Desc: not available URL: From spamecha at redhat.com Tue Sep 10 03:41:21 2019 From: spamecha at redhat.com (spamecha at redhat.com) Date: Tue, 10 Sep 2019 03:41:21 +0000 Subject: [Gluster-devel] Updated invitation: Invitation: Gluster Community Meeting @ Tue Sep 10, 2019 ... @ Tue Sep 10, 2019 11:30am - 12:25pm (IST) (gluster-devel@gluster.org) Message-ID: <00000000000091681305922aade8@google.com> This event has been changed. Title: Invitation: Gluster Community Meeting @ Tue Sep 10, 2019 11:30am - 12:25pm  Bridge: https://bluejeans.com/836554017 Minutes meeting: https://hackmd.io/_7IjU1CXTimG2p8K3GBUCg Previous Meeting notes: https://github.com/gluster/community Flash talk: No Flash Talk (changed) When: Tue Sep 10, 2019 11:30am ? 
12:25pm India Standard Time - Kolkata Where: https://bluejeans.com/836554017, bangalore-engg-rashtrakuta-12-p-vc Calendar: gluster-devel at gluster.org Who: * spamecha at redhat.com - organizer * risjain at redhat.com * gluster-users at gluster.org * gluster-devel at gluster.org * barchu02 at unm.edu Event details: https://www.google.com/calendar/event?action=VIEW&eid=N2FhOHB0bjVoZXF0bnQxbWFpMmxva2lqbXYgZ2x1c3Rlci1kZXZlbEBnbHVzdGVyLm9yZw&tok=MTkjc3BhbWVjaGFAcmVkaGF0LmNvbTdmNzZjNjdkMDNjYzFkMTE0OTJhNTQ4ZjVmMjY4NWViZGFkNTFiMmU&ctz=Asia%2FKolkata&hl=en&es=0 Invitation from Google Calendar: https://www.google.com/calendar/ You are receiving this courtesy email at the account gluster-devel at gluster.org because you are an attendee of this event. To stop receiving future updates for this event, decline this event. Alternatively you can sign up for a Google account at https://www.google.com/calendar/ and control your notification settings for your entire calendar. Forwarding this invitation could allow any recipient to send a response to the organizer and be added to the guest list, or invite others regardless of their own invitation status, or to modify your RSVP. Learn more at https://support.google.com/calendar/answer/37135#forwarding -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: text/calendar Size: 2605 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: invite.ics Type: application/ics Size: 2656 bytes Desc: not available URL: From spamecha at redhat.com Tue Sep 10 05:58:27 2019 From: spamecha at redhat.com (spamecha at redhat.com) Date: Tue, 10 Sep 2019 05:58:27 +0000 Subject: [Gluster-devel] Updated invitation: Invitation: Gluster Community Meeting @ Tue Sep 10, 2019 ... @ Tue Sep 10, 2019 11:30am - 12:25pm (IST) (gluster-devel@gluster.org) Message-ID: <000000000000dc4f3405922c974b@google.com> This event has been changed. Title: Invitation: Gluster Community Meeting @ Tue Sep 10, 2019 11:30am - 12:25pm  Bridge: https://bluejeans.com/spamecha Minutes meeting: https://hackmd.io/_7IjU1CXTimG2p8K3GBUCg Previous Meeting notes: https://github.com/gluster/community Flash talk: No Flash Talk (changed) When: Tue Sep 10, 2019 11:30am ? 12:25pm India Standard Time - Kolkata Where: https://bluejeans.com/spamecha, bangalore-engg-rashtrakuta-12-p-vc (changed) Calendar: gluster-devel at gluster.org Who: * spamecha at redhat.com - organizer * risjain at redhat.com * gluster-users at gluster.org * gluster-devel at gluster.org * barchu02 at unm.edu * alpha754293 at hotmail.com * dkhandel at redhat.com * sacharya at redhat.com * hgowtham at redhat.com * pauyeung at connexity.com * gabriel.lindeborg at svenskaspel.se * xiedanming at qiyi.com * pierre-marie.janvre at agoda.com Event details: https://www.google.com/calendar/event?action=VIEW&eid=N2FhOHB0bjVoZXF0bnQxbWFpMmxva2lqbXYgZ2x1c3Rlci1kZXZlbEBnbHVzdGVyLm9yZw&tok=MTkjc3BhbWVjaGFAcmVkaGF0LmNvbTdmNzZjNjdkMDNjYzFkMTE0OTJhNTQ4ZjVmMjY4NWViZGFkNTFiMmU&ctz=Asia%2FKolkata&hl=en&es=0 Invitation from Google Calendar: https://www.google.com/calendar/ You are receiving this courtesy email at the account gluster-devel at gluster.org because you are an attendee of this event. To stop receiving future updates for this event, decline this event. 
Alternatively you can sign up for a Google account at https://www.google.com/calendar/ and control your notification settings for your entire calendar. Forwarding this invitation could allow any recipient to send a response to the organizer and be added to the guest list, or invite others regardless of their own invitation status, or to modify your RSVP. Learn more at https://support.google.com/calendar/answer/37135#forwarding -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: text/calendar Size: 3703 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: invite.ics Type: application/ics Size: 3771 bytes Desc: not available URL: From spamecha at redhat.com Tue Sep 10 06:52:13 2019 From: spamecha at redhat.com (Sheetal Pamecha) Date: Tue, 10 Sep 2019 12:22:13 +0530 Subject: [Gluster-devel] Minutes of Gluster Community Meeting (APAC) 10th September 2019 Message-ID: Hi, The minutes of the meeting are as follows: Gluster Community Meeting - 10/09/2019Previous Meeting minutes: - http://github.com/gluster/community - Recording of this meeting - No recording available for this meeting due to technical issues Date/Time: Check the community calendar Bridge - APAC friendly hours - Tuesday 10th September 2019, 11:30AM IST - Bridge: https://bluejeans.com/9461957313 - NA/EMEA - Every 1st and 3rd Tuesday at 01:00 PM EDT - Bridge: https://bluejeans.com/486278655 ------------------------------ Attendance Name (#gluster-dev alias) - company Sheetal Pamecha (#spamecha) - Red Hat Sunny Kumar (sunny) - Red Hat Rishubh Jain (risjain) - Red Hat Ravi (@itisravi) Red Hat Sanju Rakonde (srakonde) - RedHat Rinku Kothiya (rinku) - RedHat Ashhadul Islam (aislam) - Redhat Vishal Pandey (vpandey) - RedHat Ashish Pandey (_apandey) -RedHat Sunil Kumar Acharya - RedHat Shwetha Acharya (sacharya) - RedHat Hari Gowtham (hgowtham) -Red Hat User stories - None Community - Project metrics: Metrics Value Coverity 65 Clang Scan 59 Test coverage 70.9 New Bugs in last 14 days master 7.x 6.x 5.x 6 2 7 3 Gluster User Queries in last 14 days 52 Total Bugs 350 Total Github issues 399 - Any release updates? - We are yet to merge some patches which are present in release6 but not in release7. We are waiting for it to pass centos regression. - Blocker issues across the project? nil - Notable thread form mailing list nil Conferences / Meetups - Gluster Meetup on September 25th Register here: https://www.meetup.com/glusterfs-India/events/264366771/* - Developers? Conference - {Date} - Important dates: CFP Opens: CFP Closes: CFP Status Notifications: Schedule Announcement: Event Opens for Registration : Event dates: Venue: GlusterFS - v7.0 and beyond - Proposal - https://docs.google.com/presentation/d/1rtn38S4YBe77KK5IjczWmoAR-ZSO-i3tNHg9pAH8Wt8/edit?usp=sharing - Proposed Plan: - GlusterFS-7.0 (July 1st) - Stability, Automation - Only - GlusterFS-8.0 (Nov 1st) - - Plan for Fedora 31/RHEL8.2 - GlusterFS-9.0 (March 1st, 2020) Reflink, io_uring, and similar improvements. The maintainers have to file a github issue about what they want to work on for release 8. the planning has to be started. Developer focus - Any design specs to discuss? 
- nil Component status - Arbiter - no updates - AFR - no updates - DHT - no updates - EC - fixed new data corruption issues and working on some scripts for automation for handling edge cases - FUSE - no updates - POSIX - no updates - DOC - no updates - Geo Replication - no updates - libglusterfs - no updates - glusterd - no updates - Snapshot - no updates - NFS - no updates - thin-arbiter - Flash Talk Gluster - Typical 5 min talk about Gluster with up to 5 more minutes for questions Recent Blog posts / Document updates - https://medium.com/@tumballi/fixing-glusters-git-history-6031096f6120 - https://medium.com/@rune.henriksen.skat/hey-wilson-wilson-3540521ecfc0 - https://shwetha174.blogspot.com/search/label/Gluster Gluster Friday Five - Every friday we release this, which basically covers highlight of week in gluster.Also you can find more videos in youtube link. https://www.youtube.com/channel/UCfilWh0JA5NfCjbqq1vsBVA Host Sanju will host next meeting - Who will host next meeting? - Host will need to send out the agenda 24hr - 12hrs in advance to mailing list, and also make sure to send the meeting minutes. - Host will need to reach out to one user at least who can talk about their usecase, their experience, and their needs. - Host needs to send meeting minutes as PR to http://github.com/gluster/community - The readme gives detailed steps for the host to follow. Notetaker - Who will take notes from the next meeting? RoundTable Sunny - Please RSVP to gluster meet up Yaniv will talk about Gluster X and Aravinda will talk about kaDalu Sunny - Should we start using hangouts for video calls than using bluejens? Ravi - Hangouts may have some limitations on no of people that can join meeting, which should be checked Deepshika - Hangounts video is not integrated with Red Hat conference calling system - Gluster X Action Items on host - Check-in Minutes of meeting for this meeting Regards, Sheetal Pamecha -------------- next part -------------- An HTML attachment was scrubbed... URL: From amarts at gmail.com Thu Sep 12 09:56:17 2019 From: amarts at gmail.com (Amar Tumballi) Date: Thu, 12 Sep 2019 15:26:17 +0530 Subject: [Gluster-devel] kadalu - k8s storage with Gluster Message-ID: Hi Gluster users, I am not sure how many of you use Gluster for your k8s storage (or even considering to use). I have some good news for you. Last month, I along with Aravinda spoke at DevConf India, about project kadalu. The code & README available @ https://github.com/kadalu/kadalu. We are awaiting the talk's video to be uploaded, and once done I will share the link here. Wanted to share few highlights of the kadalu project with you all, and also future scope of work. - kadalu comes with *CSI driver*, so one can use this smoothly with k8s 1.14+ versions. - Has an *operator* which starts CSI drivers, and Gluster storage pod when required. - 2 commands to setup and get k8s storage working. - kubectl create -f kadalu-operator.yml - kubectl create -f kadalu-config.yml - Native support for single disk use-case (ie, if your backend supports High Availability, no need to use Gluster's replication), which I believe is a good thing for people who already have some storage array which is highly available, and for those companies which have their own storage products, but doesn't have k8s expose. - The above can be usecase can be used on a single AWS EBS volume, if you want to save cost of Replica 3 (If you trust it to provide your required SLA for it). Here, Single EBS volume would provide multiple k8s PVs. 
- GlusterFS used is very light mode, ie, no 'glusterd', no LVM, or any other layers. Only using glusterfs for filesystem, not management. - Basic end to end testing is done using Travis CI/CD. [Need more help to enhance it further]. More on this in our presentation @ https://github.com/kadalu/kadalu/blob/master/doc/rethinking-gluster-management-using-k8s.pdf Please note that this is a project which we started as a prototype for our talk. To take it further, feedback, feature request, suggestions and contributions are very important. Let me know if you are interested to collaborate on this one. Possible future work: * Implement data backup features (possibly with geo-rep). * Resize of Volume (Both backend Gluster volume, and PV volume). * Consider implementing helm chart for operator. * Scale testing, etc. Limitations (for now) * No 'migration'. One has to start fresh with kadalu. * No Snapshot, No cloning. * As of now, there are 2 deployment guides available @ https://github.com/kadalu/kadalu-cookbook Thanks & Regards, Amar -------------- next part -------------- An HTML attachment was scrubbed... URL: From jenkins at build.gluster.org Mon Sep 16 01:45:02 2019 From: jenkins at build.gluster.org (jenkins at build.gluster.org) Date: Mon, 16 Sep 2019 01:45:02 +0000 (UTC) Subject: [Gluster-devel] Weekly Untriaged Bugs Message-ID: <627993897.21.1568598303136.JavaMail.jenkins@jenkins-el7.rht.gluster.org> [...truncated 7 lines...] https://bugzilla.redhat.com/1750265 / cli: configure name server in host will cause cli command hanging https://bugzilla.redhat.com/1743195 / core: can't start gluster after upgrade from 5 to 6 https://bugzilla.redhat.com/1744883 / core: GlusterFS problem dataloss https://bugzilla.redhat.com/1749272 / disperse: The version of the file in the disperse volume created with different nodes is incorrect https://bugzilla.redhat.com/1747844 / distribute: Rebalance doesn't work correctly if performance.parallel-readdir on and with some other specific options set https://bugzilla.redhat.com/1751575 / encryption-xlator: File corruption in encrypted volume during read operation https://bugzilla.redhat.com/1746140 / geo-replication: geo-rep: Changelog archive file format is incorrect https://bugzilla.redhat.com/1743215 / glusterd: glusterd-utils: 0-management: xfs_info exited with non-zero exit status [Permission denied] https://bugzilla.redhat.com/1746615 / glusterd: SSL Volumes Fail Intermittently in 6.5 https://bugzilla.redhat.com/1747414 / libglusterfsclient: EIO error on check_and_dump_fuse_W call https://bugzilla.redhat.com/1751907 / posix: bricks gone down unexpectedly https://bugzilla.redhat.com/1749625 / rpc: [GlusterFS 6.1] GlusterFS brick process crash https://bugzilla.redhat.com/1745916 / rpc: glusterfs client process memory leak after enable tls on community version 6.5 https://bugzilla.redhat.com/1748205 / selfheal: null gfid entries can not be healed https://bugzilla.redhat.com/1750052 / unclassified: GlusterFS repository broken https://bugzilla.redhat.com/1749369 / write-behind: Segmentation fault occurs while truncate file [...truncated 2 lines...] -------------- next part -------------- A non-text attachment was scrubbed... 
Name: build.log Type: application/octet-stream Size: 2053 bytes Desc: not available URL: From ypadia at redhat.com Mon Sep 16 07:22:56 2019 From: ypadia at redhat.com (Yati Padia) Date: Mon, 16 Sep 2019 12:52:56 +0530 Subject: [Gluster-devel] request for access Message-ID: Hello, I wanted to contribute to the issues in this community and hence request to grant the access to work as team in glusterfs. Thank you -------------- next part -------------- An HTML attachment was scrubbed... URL: From sunkumar at redhat.com Tue Sep 17 13:13:55 2019 From: sunkumar at redhat.com (Sunny Kumar) Date: Tue, 17 Sep 2019 18:43:55 +0530 Subject: [Gluster-devel] Gluster meetup: India In-Reply-To: References: Message-ID: Hi folks, A gentle reminder! Please do RSVP, if planning to attained. /sunny On Wed, Aug 28, 2019 at 6:21 PM Sunny Kumar wrote: > > Hello folks, > > We are hosting Gluster meetup at our office (Redhat-BLR-IN) on 25th > September 2019. > > Please find the agenda and location detail here [1] and plan accordingly. > > The highlight of this event will be Gluster -X we will keep on > updating agenda with topics, so keep an eye on it. > > Note: > * RSVP as YES if attending, this will help us to organize the > facilities better. > > If you have any question, please reach out to me or comment on the > event page [1]. > > Feel free to share this meetup via other channels. > > [1]. https://www.meetup.com/glusterfs-India/events/264366771/ > > > /sunny From dkhandel at redhat.com Thu Sep 19 09:30:59 2019 From: dkhandel at redhat.com (Deepshikha Khandelwal) Date: Thu, 19 Sep 2019 15:00:59 +0530 Subject: [Gluster-devel] Jenkins upgrade today Message-ID: Hi, We have planned a jenkins instance build.gluster.org upgrade today to the newer stable version so as to pull in the latest plugins updates which fixes security vulnerabilities. It will stop all the running jobs and will be unavailable during the downtime window. The downtime window will be from: IST 6:00-7:00 PM UTC 12:30-1:30 PM Please plan accordingly. Thank you, Deepshikha -------------- next part -------------- An HTML attachment was scrubbed... URL: From rkothiya at redhat.com Fri Sep 20 12:31:56 2019 From: rkothiya at redhat.com (Rinku Kothiya) Date: Fri, 20 Sep 2019 18:01:56 +0530 Subject: [Gluster-devel] [Gluster-Maintainers] GlusterFS - 7.0RC1 - Test day (26th Sep 2019) Message-ID: Hi, Release-7 RC1 packages are built. We are planning to have a test day on 26-Sep-2019, we request your participation. Do post on the lists any testing done and feedback for the same. Packages for Fedora 29, Fedora 30, RHEL 8, CentOS at https://download.gluster.org/pub/gluster/glusterfs/qa-releases/7.0rc1/ Packages are signed. The public key is at https://download.gluster.org/pub/gluster/glusterfs/6/rsa.pub Regards Rinku -------------- next part -------------- An HTML attachment was scrubbed... URL: From kkeithle at redhat.com Fri Sep 20 13:19:24 2019 From: kkeithle at redhat.com (Kaleb Keithley) Date: Fri, 20 Sep 2019 09:19:24 -0400 Subject: [Gluster-devel] [Gluster-users] [Gluster-Maintainers] GlusterFS - 7.0RC1 - Test day (26th Sep 2019) In-Reply-To: References: Message-ID: On Fri, Sep 20, 2019 at 8:39 AM Rinku Kothiya wrote: > Hi, > > Release-7 RC1 packages are built. We are planning to have a test day on > 26-Sep-2019, we request your participation. Do post on the lists any > testing done and feedback for the same. 
> > Packages for Fedora 29, Fedora 30, RHEL 8, CentOS at > https://download.gluster.org/pub/gluster/glusterfs/qa-releases/7.0rc1/ > > Packages are signed. The public key is at > https://download.gluster.org/pub/gluster/glusterfs/6/rsa.pub > FYI, there are no CentOS packages there, but there are Debian stretch and Debian buster packages. Packages for CentOS 7 are built in CentOS CBS at https://cbs.centos.org/koji/buildinfo?buildID=26538 but I don't see them in https://buildlogs.centos.org/centos/7/storage/x86_64/. @Niels, shouldn't we expect them in buildlogs? -- Kaleb -------------- next part -------------- An HTML attachment was scrubbed... URL: From ndevos at redhat.com Fri Sep 20 13:34:18 2019 From: ndevos at redhat.com (Niels de Vos) Date: Fri, 20 Sep 2019 15:34:18 +0200 Subject: [Gluster-devel] [Gluster-users] [Gluster-Maintainers] GlusterFS - 7.0RC1 - Test day (26th Sep 2019) In-Reply-To: References: Message-ID: <20190920133418.GD10884@ndevos-x270> On Fri, Sep 20, 2019 at 09:19:24AM -0400, Kaleb Keithley wrote: > On Fri, Sep 20, 2019 at 8:39 AM Rinku Kothiya wrote: > > > Hi, > > > > Release-7 RC1 packages are built. We are planning to have a test day on > > 26-Sep-2019, we request your participation. Do post on the lists any > > testing done and feedback for the same. > > > > Packages for Fedora 29, Fedora 30, RHEL 8, CentOS at > > https://download.gluster.org/pub/gluster/glusterfs/qa-releases/7.0rc1/ > > > > Packages are signed. The public key is at > > https://download.gluster.org/pub/gluster/glusterfs/6/rsa.pub > > > > FYI, there are no CentOS packages there, but there are Debian stretch and > Debian buster packages. > > Packages for CentOS 7 are built in CentOS CBS at > https://cbs.centos.org/koji/buildinfo?buildID=26538 but I don't see them in > https://buildlogs.centos.org/centos/7/storage/x86_64/. > > @Niels, shouldn't we expect them in buildlogs? Ai, it seems the requested configuration for syncing is not applied yet: - https://bugs.centos.org/view.php?id=16363 I've now pinged in #centos-devel on Freenode to get some attention to the request. Thanks, Niels From jenkins at build.gluster.org Mon Sep 23 01:45:08 2019 From: jenkins at build.gluster.org (jenkins at build.gluster.org) Date: Mon, 23 Sep 2019 01:45:08 +0000 (UTC) Subject: [Gluster-devel] Weekly Untriaged Bugs Message-ID: <1750564541.5.1569203120075.JavaMail.jenkins@jenkins-el7.rht.gluster.org> [...truncated 7 lines...] 
https://bugzilla.redhat.com/1750265 / cli: configure name server in host will cause cli command hanging https://bugzilla.redhat.com/1753994 / core: Mtime is not updated on setting it to older date online when sharding enabled https://bugzilla.redhat.com/1749272 / disperse: The version of the file in the disperse volume created with different nodes is incorrect https://bugzilla.redhat.com/1747844 / distribute: Rebalance doesn't work correctly if performance.parallel-readdir on and with some other specific options set https://bugzilla.redhat.com/1751575 / encryption-xlator: File corruption in encrypted volume during read operation https://bugzilla.redhat.com/1746140 / geo-replication: geo-rep: Changelog archive file format is incorrect https://bugzilla.redhat.com/1746615 / glusterd: SSL Volumes Fail Intermittently in 6.5 https://bugzilla.redhat.com/1753569 / libgfapi: git clone fails on gluster volumes exported via nfs-ganesha https://bugzilla.redhat.com/1747414 / libglusterfsclient: EIO error on check_and_dump_fuse_W call https://bugzilla.redhat.com/1753587 / project-infrastructure: https://build.gluster.org/job/compare-bug-version-and-git-branch/41059/ fails for public BZ https://bugzilla.redhat.com/1754017 / project-infrastructure: request a user account for the blog on gluster.org https://bugzilla.redhat.com/1749625 / rpc: [GlusterFS 6.1] GlusterFS brick process crash https://bugzilla.redhat.com/1745916 / rpc: glusterfs client process memory leak after enable tls on community version 6.5 https://bugzilla.redhat.com/1748205 / selfheal: null gfid entries can not be healed https://bugzilla.redhat.com/1753413 / selfheal: Self-heal daemon crashes https://bugzilla.redhat.com/1750052 / unclassified: GlusterFS repository broken https://bugzilla.redhat.com/1749369 / write-behind: Segmentation fault occurs while truncate file [...truncated 2 lines...] -------------- next part -------------- A non-text attachment was scrubbed... Name: build.log Type: application/octet-stream Size: 2247 bytes Desc: not available URL: From srakonde at redhat.com Mon Sep 23 07:24:01 2019 From: srakonde at redhat.com (srakonde at redhat.com) Date: Mon, 23 Sep 2019 07:24:01 +0000 Subject: [Gluster-devel] Invitation: Glusterfs Community meeting @ Tue Sep 24, 2019 11:30am - 12:20pm (IST) (gluster-devel@gluster.org) Message-ID: <000000000000cb3d880593334d74@google.com> You have been invited to the following event. Title: Glusterfs Community meeting Meeting minutes: https://hackmd.io/fDnZwqYlQd-yJeTdzSHwSA Previous meeting minutes: https://github.com/gluster/community When: Tue Sep 24, 2019 11:30am ? 12:20pm India Standard Time - Kolkata Where: https://bluejeans.com/118564314, bangalore-engg-rashtrakuta-12-p-vc Calendar: gluster-devel at gluster.org Who: * srakonde at redhat.com - organizer * gluster-users at gluster.org * gluster-devel at gluster.org Event details: https://www.google.com/calendar/event?action=VIEW&eid=NmdmODU0amFpbmdsczM2dGdqbGlkdXRmNmkgZ2x1c3Rlci1kZXZlbEBnbHVzdGVyLm9yZw&tok=MTkjc3Jha29uZGVAcmVkaGF0LmNvbTdjMTM5OTkyMmFlOTNiZTQ1Y2U1ZDNmMTc5NTY2MGYyNWFkZjAxODU&ctz=Asia%2FKolkata&hl=en&es=0 Invitation from Google Calendar: https://www.google.com/calendar/ You are receiving this courtesy email at the account gluster-devel at gluster.org because you are an attendee of this event. To stop receiving future updates for this event, decline this event. Alternatively you can sign up for a Google account at https://www.google.com/calendar/ and control your notification settings for your entire calendar. 
Forwarding this invitation could allow any recipient to send a response to the organizer and be added to the guest list, or invite others regardless of their own invitation status, or to modify your RSVP. Learn more at https://support.google.com/calendar/answer/37135#forwarding -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: text/calendar Size: 1988 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: invite.ics Type: application/ics Size: 1988 bytes Desc: not available URL: From sacharya at redhat.com Mon Sep 23 10:43:57 2019 From: sacharya at redhat.com (Shwetha Acharya) Date: Mon, 23 Sep 2019 16:13:57 +0530 Subject: [Gluster-devel] Geo-rep start after snapshot restore makes the geo-rep faulty Message-ID: Hi All, I am planning to work on this bugzilla issue. Here, when we restore the snapshots, and start the geo-replication session, we see that the geo-replication goes faulty. It is mainly because, the brick path of original session and the session after snapshot restore will be different. There is a proposed work around for this issue, according to which we replace the old brick path with new brick path inside the index file HTIME.xxxxxxxxxx, which basically solves the issue. I have some doubts regarding the same. We are going with the work around from a long time. Are there any limitations stopping us from implementing solution for this, which I am currently unaware of? Is it important to have paths inside index file? Can we eliminate the paths inside them? Is there any concerns from snapshot side? Are there any other general concerns regarding the same? Regards, Shwetha -------------- next part -------------- An HTML attachment was scrubbed... URL: From rkavunga at redhat.com Mon Sep 23 12:23:35 2019 From: rkavunga at redhat.com (RAFI KC) Date: Mon, 23 Sep 2019 17:53:35 +0530 Subject: [Gluster-devel] Geo-rep start after snapshot restore makes the geo-rep faulty In-Reply-To: References: Message-ID: <12b3aca6-2501-80d5-1052-be80f817ebc1@redhat.com> On 9/23/19 4:13 PM, Shwetha Acharya wrote: > Hi All, > I am planning to work on this > ?bugzilla issue. > Here, when we restore the snapshots, and start the geo-replication > session, we see that the geo-replication goes faulty. It is mainly > because, the brick path of original session and the session after > snapshot restore will be different. There is a proposed work around > for this issue, according to which we replace the old brick path with > new brick path inside the index file HTIME.xxxxxxxxxx, which basically > solves the issue. > > I have some doubts regarding the same. > We are going with the work around from a long time. Are there any > limitations stopping us from implementing solution for this, which I > am currently unaware of? > Is it important to have paths inside index file? Can we eliminate the > paths inside them? > Is there any concerns from snapshot side? Can you please explain how we are planning to replace the path in the index file. Did we finalized the method? The problem here is that any time consuming operation within the glusterd transaction could be a difficult. Rafi > Are there any other general concerns regarding the same? > > Regards, > Shwetha -------------- next part -------------- An HTML attachment was scrubbed... 
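For readers following this thread, the manual workaround being discussed is essentially a path substitution inside the HTIME index file(s) on the restored brick. The sketch below shows that idea at the byte level, so it does not depend on how entries are separated; the example brick paths, the htime location under <brick>/.glusterfs/changelogs/htime/ and the .bak suffix are assumptions for illustration only — verify them (and stop the geo-rep session first) before attempting anything like this on a real setup.

import glob, shutil

OLD_BRICK = b"/bricks/b1"                          # brick path recorded before the restore (assumed)
NEW_BRICK = b"/run/gluster/snaps/snap1/brick1/b1"  # brick path after snapshot restore (assumed)
HTIME_GLOB = NEW_BRICK.decode() + "/.glusterfs/changelogs/htime/HTIME.*"

for htime in glob.glob(HTIME_GLOB):
    shutil.copy2(htime, htime + ".bak")  # keep a backup before rewriting the index
    with open(htime, "rb") as f:
        data = f.read()
    with open(htime, "wb") as f:
        # rewrite every recorded changelog path to point at the restored brick
        f.write(data.replace(OLD_BRICK, NEW_BRICK))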
URL: From sacharya at redhat.com Mon Sep 23 12:39:49 2019 From: sacharya at redhat.com (Shwetha Acharya) Date: Mon, 23 Sep 2019 18:09:49 +0530 Subject: [Gluster-devel] Geo-rep start after snapshot restore makes the geo-rep faulty In-Reply-To: <12b3aca6-2501-80d5-1052-be80f817ebc1@redhat.com> References: <12b3aca6-2501-80d5-1052-be80f817ebc1@redhat.com> Message-ID: We are thinking of eliminating path from index file, instead of replacing it. We need to further see if it is feasible to do so. I am looking into it. @Aravinda Vishwanathapura Krishna Murthy @Kotresh Hiremath Ravishankar Any pointers on this? Regards, Shwetha On Mon, Sep 23, 2019 at 5:53 PM RAFI KC wrote: > > On 9/23/19 4:13 PM, Shwetha Acharya wrote: > > Hi All, > I am planning to work on this > bugzilla issue. > Here, when we restore the snapshots, and start the geo-replication > session, we see that the geo-replication goes faulty. It is mainly because, > the brick path of original session and the session after snapshot restore > will be different. There is a proposed work around for this issue, > according to which we replace the old brick path with new brick path inside > the index file HTIME.xxxxxxxxxx, which basically solves the issue. > > I have some doubts regarding the same. > We are going with the work around from a long time. Are there any > limitations stopping us from implementing solution for this, which I am > currently unaware of? > Is it important to have paths inside index file? Can we eliminate the > paths inside them? > Is there any concerns from snapshot side? > > Can you please explain how we are planning to replace the path in the > index file. Did we finalized the method? The problem here is that any time > consuming operation within the glusterd transaction could be a difficult. > > Rafi > > Are there any other general concerns regarding the same? > > Regards, > Shwetha > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rkavunga at redhat.com Mon Sep 23 12:42:28 2019 From: rkavunga at redhat.com (RAFI KC) Date: Mon, 23 Sep 2019 18:12:28 +0530 Subject: [Gluster-devel] Geo-rep start after snapshot restore makes the geo-rep faulty In-Reply-To: References: <12b3aca6-2501-80d5-1052-be80f817ebc1@redhat.com> Message-ID: If that is the case, then we don't need to do anything from snapshot point of view. Regards Rafi KC On 9/23/19 6:09 PM, Shwetha Acharya wrote: > We are thinking of eliminating path from index file, instead of > replacing it. We need to further see if it is feasible to do so. I am > looking into it. @Aravinda Vishwanathapura Krishna Murthy > @Kotresh Hiremath Ravishankar > ? Any pointers on this? > > Regards, > Shwetha > > On Mon, Sep 23, 2019 at 5:53 PM RAFI KC > wrote: > > > On 9/23/19 4:13 PM, Shwetha Acharya wrote: >> Hi All, >> I am planning to work on this >> ?bugzilla >> issue. >> Here, when we restore the snapshots, and start the >> geo-replication session, we see that the geo-replication goes >> faulty. It is mainly because, the brick path of original session >> and the session after snapshot restore will be different. There >> is a proposed work around for this issue, according to which we >> replace the old brick path with new brick path inside the index >> file HTIME.xxxxxxxxxx, which basically solves the issue. >> >> I have some doubts regarding the same. >> We are going with the work around from a long time. Are there any >> limitations stopping us from implementing solution for this, >> which I am currently unaware of? 
>> Is it important to have paths inside index file? Can we eliminate >> the paths inside them? >> Is there any concerns from snapshot side? > > Can you please explain how we are planning to replace the path in > the index file. Did we finalized the method? The problem here is > that any time consuming operation within the glusterd transaction > could be a difficult. > > Rafi > >> Are there any other general concerns regarding the same? >> >> Regards, >> Shwetha > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sacharya at redhat.com Mon Sep 23 12:45:37 2019 From: sacharya at redhat.com (Shwetha Acharya) Date: Mon, 23 Sep 2019 18:15:37 +0530 Subject: [Gluster-devel] Geo-rep start after snapshot restore makes the geo-rep faulty In-Reply-To: References: <12b3aca6-2501-80d5-1052-be80f817ebc1@redhat.com> Message-ID: Thank you for the clarification. On Mon, Sep 23, 2019 at 6:12 PM RAFI KC wrote: > If that is the case, then we don't need to do anything from snapshot point > of view. > > > Regards > > Rafi KC > On 9/23/19 6:09 PM, Shwetha Acharya wrote: > > We are thinking of eliminating path from index file, instead of replacing > it. We need to further see if it is feasible to do so. I am looking into > it. @Aravinda Vishwanathapura Krishna Murthy @Kotresh > Hiremath Ravishankar Any pointers on this? > > Regards, > Shwetha > > On Mon, Sep 23, 2019 at 5:53 PM RAFI KC wrote: > >> >> On 9/23/19 4:13 PM, Shwetha Acharya wrote: >> >> Hi All, >> I am planning to work on this >> bugzilla issue. >> Here, when we restore the snapshots, and start the geo-replication >> session, we see that the geo-replication goes faulty. It is mainly because, >> the brick path of original session and the session after snapshot restore >> will be different. There is a proposed work around for this issue, >> according to which we replace the old brick path with new brick path inside >> the index file HTIME.xxxxxxxxxx, which basically solves the issue. >> >> I have some doubts regarding the same. >> We are going with the work around from a long time. Are there any >> limitations stopping us from implementing solution for this, which I am >> currently unaware of? >> Is it important to have paths inside index file? Can we eliminate the >> paths inside them? >> Is there any concerns from snapshot side? >> >> Can you please explain how we are planning to replace the path in the >> index file. Did we finalized the method? The problem here is that any time >> consuming operation within the glusterd transaction could be a difficult. >> >> Rafi >> >> Are there any other general concerns regarding the same? >> >> Regards, >> Shwetha >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From avishwan at redhat.com Tue Sep 24 02:49:51 2019 From: avishwan at redhat.com (Aravinda Vishwanathapura Krishna Murthy) Date: Tue, 24 Sep 2019 08:19:51 +0530 Subject: [Gluster-devel] Geo-rep start after snapshot restore makes the geo-rep faulty In-Reply-To: References: Message-ID: Hi Shwetha, Good to see this bug is picked up. You are right, and the fix should be to remove the path from HTIME file. RFE is already available here https://github.com/gluster/glusterfs/issues/76 There is one more RFE about optimizing Changelogs storage. Currently, all changelogs are stored in a single directory, so this needs to be changed. This affects the above RFE, instead of storing a complete changelog path in HTIME file store with the prefix used in this RFE. 
https://github.com/gluster/glusterfs/issues/154 These two RFE's to be worked together. One major issue with format change is to handle the upgrades. Workaround script to be used to upgrade existing HTIME file and new directory structure of Changelog files. Let me know if you have any questions. On Mon, Sep 23, 2019 at 4:14 PM Shwetha Acharya wrote: > Hi All, > I am planning to work on this > bugzilla issue. > Here, when we restore the snapshots, and start the geo-replication > session, we see that the geo-replication goes faulty. It is mainly because, > the brick path of original session and the session after snapshot restore > will be different. There is a proposed work around for this issue, > according to which we replace the old brick path with new brick path inside > the index file HTIME.xxxxxxxxxx, which basically solves the issue. > > I have some doubts regarding the same. > We are going with the work around from a long time. Are there any > limitations stopping us from implementing solution for this, which I am > currently unaware of? > Is it important to have paths inside index file? Can we eliminate the > paths inside them? > Is there any concerns from snapshot side? > Are there any other general concerns regarding the same? > > Regards, > Shwetha > -- regards Aravinda VK -------------- next part -------------- An HTML attachment was scrubbed... URL: From sacharya at redhat.com Tue Sep 24 05:39:00 2019 From: sacharya at redhat.com (Shwetha Acharya) Date: Tue, 24 Sep 2019 11:09:00 +0530 Subject: [Gluster-devel] Geo-rep start after snapshot restore makes the geo-rep faulty In-Reply-To: References: Message-ID: Hi Aravinda, Thanks for the update. I will get back to you in case of any queries. On Tue, Sep 24, 2019 at 8:20 AM Aravinda Vishwanathapura Krishna Murthy < avishwan at redhat.com> wrote: > Hi Shwetha, > > Good to see this bug is picked up. > > You are right, and the fix should be to remove the path from HTIME file. > RFE is already available here > https://github.com/gluster/glusterfs/issues/76 > > There is one more RFE about optimizing Changelogs storage. Currently, all > changelogs are stored in a single directory, so this needs to be changed. > This affects the above RFE, instead of storing a complete changelog path in > HTIME file store with the prefix used in this RFE. > > https://github.com/gluster/glusterfs/issues/154 > > These two RFE's to be worked together. > > One major issue with format change is to handle the upgrades. Workaround > script to be used to upgrade existing HTIME file and new directory > structure of Changelog files. > > Let me know if you have any questions. > > > On Mon, Sep 23, 2019 at 4:14 PM Shwetha Acharya > wrote: > >> Hi All, >> I am planning to work on this >> bugzilla issue. >> Here, when we restore the snapshots, and start the geo-replication >> session, we see that the geo-replication goes faulty. It is mainly because, >> the brick path of original session and the session after snapshot restore >> will be different. There is a proposed work around for this issue, >> according to which we replace the old brick path with new brick path inside >> the index file HTIME.xxxxxxxxxx, which basically solves the issue. >> >> I have some doubts regarding the same. >> We are going with the work around from a long time. Are there any >> limitations stopping us from implementing solution for this, which I am >> currently unaware of? >> Is it important to have paths inside index file? Can we eliminate the >> paths inside them? 
>> Is there any concerns from snapshot side? >> Are there any other general concerns regarding the same? >> >> Regards, >> Shwetha >> > > > -- > regards > Aravinda VK > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tomkcpr at mdevsys.com Wed Sep 25 05:24:37 2019 From: tomkcpr at mdevsys.com (TomK) Date: Wed, 25 Sep 2019 01:24:37 -0400 Subject: [Gluster-devel] 0-management: Commit failed for operation Start on local node Message-ID: Hey All, I'm getting the below error when trying to start a 2 node Gluster cluster. I had the quorum enabled when I was at version 3.12 . However with this version it needed the quorum disabled. So I did so however now see the subject error. Any ideas what I could try next? -- Thx, TK. [2019-09-25 05:17:26.615203] D [MSGID: 0] [glusterd-utils.c:1136:glusterd_resolve_brick] 0-management: Returning 0 [2019-09-25 05:17:26.615555] D [MSGID: 0] [glusterd-mgmt.c:243:gd_mgmt_v3_pre_validate_fn] 0-management: OP = 5. Returning 0 [2019-09-25 05:17:26.616271] D [MSGID: 0] [glusterd-utils.c:1767:glusterd_volinfo_find] 0-management: Volume mdsgv01 found [2019-09-25 05:17:26.616305] D [MSGID: 0] [glusterd-utils.c:1774:glusterd_volinfo_find] 0-management: Returning 0 [2019-09-25 05:17:26.616327] D [MSGID: 0] [glusterd-utils.c:6327:glusterd_brick_start] 0-management: returning 0 [2019-09-25 05:17:26.617056] I [glusterd-utils.c:6312:glusterd_brick_start] 0-management: starting a fresh brick process for brick /mnt/p01-d01/glusterv01 [2019-09-25 05:17:26.722717] E [MSGID: 106005] [glusterd-utils.c:6317:glusterd_brick_start] 0-management: Unable to start brick mdskvm-p01.nix.mds.xyz:/mnt/p01-d01/glusterv01 [2019-09-25 05:17:26.722960] D [MSGID: 0] [glusterd-utils.c:6327:glusterd_brick_start] 0-management: returning -107 [2019-09-25 05:17:26.723006] E [MSGID: 106122] [glusterd-mgmt.c:341:gd_mgmt_v3_commit_fn] 0-management: Volume start commit failed. [2019-09-25 05:17:26.723027] D [MSGID: 0] [glusterd-mgmt.c:444:gd_mgmt_v3_commit_fn] 0-management: OP = 5. Returning -107 [2019-09-25 05:17:26.723045] E [MSGID: 106122] [glusterd-mgmt.c:1696:glusterd_mgmt_v3_commit] 0-management: Commit failed for operation Start on local node [2019-09-25 05:17:26.723073] D [MSGID: 0] [glusterd-op-sm.c:5106:glusterd_op_modify_op_ctx] 0-management: op_ctx modification not required [2019-09-25 05:17:26.723141] E [MSGID: 106122] [glusterd-mgmt.c:2466:glusterd_mgmt_v3_initiate_all_phases] 0-management: Commit Op Failed [2019-09-25 05:17:26.723204] D [MSGID: 0] [glusterd-locks.c:797:glusterd_mgmt_v3_unlock] 0-management: Trying to release lock of vol mdsgv01 for f7336db6-22b4-497d-8c2f-04c833a28546 as mdsgv01_vol [2019-09-25 05:17:26.723239] D [MSGID: 0] [glusterd-locks.c:846:glusterd_mgmt_v3_unlock] 0-management: Lock for vol mdsgv01 successfully released [2019-09-25 05:17:26.723273] D [MSGID: 0] [glusterd-utils.c:1767:glusterd_volinfo_find] 0-management: Volume mdsgv01 found [2019-09-25 05:17:26.723326] D [MSGID: 0] [glusterd-utils.c:1774:glusterd_volinfo_find] 0-management: Returning 0 [2019-09-25 05:17:26.723360] D [MSGID: 0] [glusterd-locks.c:464:glusterd_multiple_mgmt_v3_unlock] 0-management: Returning 0 ==> /var/log/glusterfs/cmd_history.log <== [2019-09-25 05:17:26.723390] : volume start mdsgv01 : FAILED : Commit failed on localhost. Please check log file for details. 
==> /var/log/glusterfs/glusterd.log <== [2019-09-25 05:17:26.723479] D [MSGID: 0] [glusterd-rpc-ops.c:199:glusterd_op_send_cli_response] 0-management: Returning 0 [root at mdskvm-p01 glusterfs]# cat /etc/glusterfs/glusterd.vol volume management type mgmt/glusterd option working-directory /var/lib/glusterd option transport-type socket,rdma option transport.socket.keepalive-time 10 option transport.socket.keepalive-interval 2 option transport.socket.read-fail-log off option ping-timeout 0 option event-threads 1 option rpc-auth-allow-insecure on # option cluster.server-quorum-type server # option cluster.quorum-type auto option server.event-threads 8 option client.event-threads 8 option performance.write-behind-window-size 8MB option performance.io-thread-count 16 option performance.cache-size 1GB option nfs.trusted-sync on option storage.owner-uid 36 option storage.owner-uid 36 option cluster.data-self-heal-algorithm full option performance.low-prio-threads 32 option features.shard-block-size 512MB option features.shard on end-volume [root at mdskvm-p01 glusterfs]# [root at mdskvm-p01 glusterfs]# gluster volume info Volume Name: mdsgv01 Type: Replicate Volume ID: f5b57076-dbd4-4d77-ae13-c1f3ee3adbe0 Status: Stopped Snapshot Count: 0 Number of Bricks: 1 x 2 = 2 Transport-type: tcp Bricks: Brick1: mdskvm-p02.nix.mds.xyz:/mnt/p02-d01/glusterv02 Brick2: mdskvm-p01.nix.mds.xyz:/mnt/p01-d01/glusterv01 Options Reconfigured: storage.owner-gid: 36 cluster.data-self-heal-algorithm: full performance.low-prio-threads: 32 features.shard-block-size: 512MB features.shard: on storage.owner-uid: 36 cluster.server-quorum-type: none cluster.quorum-type: none server.event-threads: 8 client.event-threads: 8 performance.write-behind-window-size: 8MB performance.io-thread-count: 16 performance.cache-size: 1GB nfs.trusted-sync: on server.allow-insecure: on performance.readdir-ahead: on diagnostics.brick-log-level: DEBUG diagnostics.brick-sys-log-level: INFO diagnostics.client-log-level: DEBUG [root at mdskvm-p01 glusterfs]# From rkothiya at redhat.com Wed Sep 25 07:15:10 2019 From: rkothiya at redhat.com (Rinku Kothiya) Date: Wed, 25 Sep 2019 12:45:10 +0530 Subject: [Gluster-devel] [Gluster-Maintainers] GlusterFS - 7.0RC1 - Test day Postponed Message-ID: Hi, As we are planning to do a RC2 for release-7 we would want to postpone the test day event. Hence we wont be having test day on 26-Sep-2019. I will keep you posted on the rescheduled date of test day. Regards Rinku -------------- next part -------------- An HTML attachment was scrubbed... URL: From srakonde at redhat.com Wed Sep 25 09:08:46 2019 From: srakonde at redhat.com (Sanju Rakonde) Date: Wed, 25 Sep 2019 14:38:46 +0530 Subject: [Gluster-devel] 0-management: Commit failed for operation Start on local node In-Reply-To: References: Message-ID: Hi, The below errors indicate that brick process is failed to start. Please attach brick log. [glusterd-utils.c:6312:glusterd_brick_start] 0-management: starting a fresh brick process for brick /mnt/p01-d01/glusterv01 [2019-09-25 05:17:26.722717] E [MSGID: 106005] [glusterd-utils.c:6317:glusterd_brick_start] 0-management: Unable to start brick mdskvm-p01.nix.mds.xyz:/mnt/p01-d01/glusterv01 [2019-09-25 05:17:26.722960] D [MSGID: 0] [glusterd-utils.c:6327:glusterd_brick_start] 0-management: returning -107 [2019-09-25 05:17:26.723006] E [MSGID: 106122] [glusterd-mgmt.c:341:gd_mgmt_v3_commit_fn] 0-management: Volume start commit failed. 
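The brick log requested here lives under /var/log/glusterfs/bricks/ and is
named after the brick path; that matches the -l argument visible in the
brick start command shared later in this thread. A minimal sketch for
locating the log and pulling out its most recent warning and error lines
follows; the level filtering is only an illustrative choice.

#!/usr/bin/env python3
# Sketch: derive a brick's log file from its brick path and print the last
# few warning/error lines. The name mangling mirrors the -l argument used by
# glusterfsd (/mnt/p01-d01/glusterv01 -> mnt-p01-d01-glusterv01.log); the
# filtering on the "] E [" / "] W [" level markers is an illustrative choice.
import os
import sys

def brick_log_path(brick_path, log_dir="/var/log/glusterfs/bricks"):
    name = brick_path.strip("/").replace("/", "-") + ".log"
    return os.path.join(log_dir, name)

def last_problems(log_file, keep=20):
    with open(log_file, errors="replace") as f:
        hits = [line.rstrip() for line in f if "] E [" in line or "] W [" in line]
    return hits[-keep:]

if __name__ == "__main__":
    brick = sys.argv[1] if len(sys.argv) > 1 else "/mnt/p01-d01/glusterv01"
    log_file = brick_log_path(brick)
    print(log_file)
    for line in last_problems(log_file):
        print(line)

Against the brick log attached further down the thread, that filter surfaces
the failed portmap rpc-request and the cleanup_and_exit warnings, which line
up with the glusterd listen-port change that eventually resolved the start
failure.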
On Wed, Sep 25, 2019 at 11:00 AM TomK wrote: > Hey All, > > I'm getting the below error when trying to start a 2 node Gluster cluster. > > I had the quorum enabled when I was at version 3.12 . However with this > version it needed the quorum disabled. So I did so however now see the > subject error. > > Any ideas what I could try next? > > -- > Thx, > TK. > > > [2019-09-25 05:17:26.615203] D [MSGID: 0] > [glusterd-utils.c:1136:glusterd_resolve_brick] 0-management: Returning 0 > [2019-09-25 05:17:26.615555] D [MSGID: 0] > [glusterd-mgmt.c:243:gd_mgmt_v3_pre_validate_fn] 0-management: OP = 5. > Returning 0 > [2019-09-25 05:17:26.616271] D [MSGID: 0] > [glusterd-utils.c:1767:glusterd_volinfo_find] 0-management: Volume > mdsgv01 found > [2019-09-25 05:17:26.616305] D [MSGID: 0] > [glusterd-utils.c:1774:glusterd_volinfo_find] 0-management: Returning 0 > [2019-09-25 05:17:26.616327] D [MSGID: 0] > [glusterd-utils.c:6327:glusterd_brick_start] 0-management: returning 0 > [2019-09-25 05:17:26.617056] I > [glusterd-utils.c:6312:glusterd_brick_start] 0-management: starting a > fresh brick process for brick /mnt/p01-d01/glusterv01 > [2019-09-25 05:17:26.722717] E [MSGID: 106005] > [glusterd-utils.c:6317:glusterd_brick_start] 0-management: Unable to > start brick mdskvm-p01.nix.mds.xyz:/mnt/p01-d01/glusterv01 > [2019-09-25 05:17:26.722960] D [MSGID: 0] > [glusterd-utils.c:6327:glusterd_brick_start] 0-management: returning -107 > [2019-09-25 05:17:26.723006] E [MSGID: 106122] > [glusterd-mgmt.c:341:gd_mgmt_v3_commit_fn] 0-management: Volume start > commit failed. > [2019-09-25 05:17:26.723027] D [MSGID: 0] > [glusterd-mgmt.c:444:gd_mgmt_v3_commit_fn] 0-management: OP = 5. > Returning -107 > [2019-09-25 05:17:26.723045] E [MSGID: 106122] > [glusterd-mgmt.c:1696:glusterd_mgmt_v3_commit] 0-management: Commit > failed for operation Start on local node > [2019-09-25 05:17:26.723073] D [MSGID: 0] > [glusterd-op-sm.c:5106:glusterd_op_modify_op_ctx] 0-management: op_ctx > modification not required > [2019-09-25 05:17:26.723141] E [MSGID: 106122] > [glusterd-mgmt.c:2466:glusterd_mgmt_v3_initiate_all_phases] > 0-management: Commit Op Failed > [2019-09-25 05:17:26.723204] D [MSGID: 0] > [glusterd-locks.c:797:glusterd_mgmt_v3_unlock] 0-management: Trying to > release lock of vol mdsgv01 for f7336db6-22b4-497d-8c2f-04c833a28546 as > mdsgv01_vol > [2019-09-25 05:17:26.723239] D [MSGID: 0] > [glusterd-locks.c:846:glusterd_mgmt_v3_unlock] 0-management: Lock for > vol mdsgv01 successfully released > [2019-09-25 05:17:26.723273] D [MSGID: 0] > [glusterd-utils.c:1767:glusterd_volinfo_find] 0-management: Volume > mdsgv01 found > [2019-09-25 05:17:26.723326] D [MSGID: 0] > [glusterd-utils.c:1774:glusterd_volinfo_find] 0-management: Returning 0 > [2019-09-25 05:17:26.723360] D [MSGID: 0] > [glusterd-locks.c:464:glusterd_multiple_mgmt_v3_unlock] 0-management: > Returning 0 > > ==> /var/log/glusterfs/cmd_history.log <== > [2019-09-25 05:17:26.723390] : volume start mdsgv01 : FAILED : Commit > failed on localhost. Please check log file for details. 
> > ==> /var/log/glusterfs/glusterd.log <== > [2019-09-25 05:17:26.723479] D [MSGID: 0] > [glusterd-rpc-ops.c:199:glusterd_op_send_cli_response] 0-management: > Returning 0 > > > > [root at mdskvm-p01 glusterfs]# cat /etc/glusterfs/glusterd.vol > volume management > type mgmt/glusterd > option working-directory /var/lib/glusterd > option transport-type socket,rdma > option transport.socket.keepalive-time 10 > option transport.socket.keepalive-interval 2 > option transport.socket.read-fail-log off > option ping-timeout 0 > option event-threads 1 > option rpc-auth-allow-insecure on > # option cluster.server-quorum-type server > # option cluster.quorum-type auto > option server.event-threads 8 > option client.event-threads 8 > option performance.write-behind-window-size 8MB > option performance.io-thread-count 16 > option performance.cache-size 1GB > option nfs.trusted-sync on > option storage.owner-uid 36 > option storage.owner-uid 36 > option cluster.data-self-heal-algorithm full > option performance.low-prio-threads 32 > option features.shard-block-size 512MB > option features.shard on > end-volume > [root at mdskvm-p01 glusterfs]# > > > [root at mdskvm-p01 glusterfs]# gluster volume info > > Volume Name: mdsgv01 > Type: Replicate > Volume ID: f5b57076-dbd4-4d77-ae13-c1f3ee3adbe0 > Status: Stopped > Snapshot Count: 0 > Number of Bricks: 1 x 2 = 2 > Transport-type: tcp > Bricks: > Brick1: mdskvm-p02.nix.mds.xyz:/mnt/p02-d01/glusterv02 > Brick2: mdskvm-p01.nix.mds.xyz:/mnt/p01-d01/glusterv01 > Options Reconfigured: > storage.owner-gid: 36 > cluster.data-self-heal-algorithm: full > performance.low-prio-threads: 32 > features.shard-block-size: 512MB > features.shard: on > storage.owner-uid: 36 > cluster.server-quorum-type: none > cluster.quorum-type: none > server.event-threads: 8 > client.event-threads: 8 > performance.write-behind-window-size: 8MB > performance.io-thread-count: 16 > performance.cache-size: 1GB > nfs.trusted-sync: on > server.allow-insecure: on > performance.readdir-ahead: on > diagnostics.brick-log-level: DEBUG > diagnostics.brick-sys-log-level: INFO > diagnostics.client-log-level: DEBUG > [root at mdskvm-p01 glusterfs]# > > > _______________________________________________ > > Community Meeting Calendar: > > APAC Schedule - > Every 2nd and 4th Tuesday at 11:30 AM IST > Bridge: https://bluejeans.com/118564314 > > NA/EMEA Schedule - > Every 1st and 3rd Tuesday at 01:00 PM EDT > Bridge: https://bluejeans.com/118564314 > > Gluster-devel mailing list > Gluster-devel at gluster.org > https://lists.gluster.org/mailman/listinfo/gluster-devel > > -- Thanks, Sanju -------------- next part -------------- An HTML attachment was scrubbed... URL: From tomkcpr at mdevsys.com Wed Sep 25 10:48:57 2019 From: tomkcpr at mdevsys.com (TomK) Date: Wed, 25 Sep 2019 06:48:57 -0400 Subject: [Gluster-devel] 0-management: Commit failed for operation Start on local node In-Reply-To: References: Message-ID: <34a797ee-f0f1-4b49-452b-1d4b6166c11b@mdevsys.com> Attached. On 9/25/2019 5:08 AM, Sanju Rakonde wrote: > Hi, The below errors indicate that brick process is failed to start. > Please attach brick log. 
> > [glusterd-utils.c:6312:glusterd_brick_start] 0-management: starting a > fresh brick process for brick /mnt/p01-d01/glusterv01 > [2019-09-25 05:17:26.722717] E [MSGID: 106005] > [glusterd-utils.c:6317:glusterd_brick_start] 0-management: Unable to > start brick mdskvm-p01.nix.mds.xyz:/mnt/p01-d01/glusterv01 > [2019-09-25 05:17:26.722960] D [MSGID: 0] > [glusterd-utils.c:6327:glusterd_brick_start] 0-management: returning -107 > [2019-09-25 05:17:26.723006] E [MSGID: 106122] > [glusterd-mgmt.c:341:gd_mgmt_v3_commit_fn] 0-management: Volume start > commit failed. > > On Wed, Sep 25, 2019 at 11:00 AM TomK > wrote: > > Hey All, > > I'm getting the below error when trying to start a 2 node Gluster > cluster. > > I had the quorum enabled when I was at version 3.12 .? However with > this > version it needed the quorum disabled.? So I did so however now see the > subject error. > > Any ideas what I could try next? > > -- > Thx, > TK. > > > [2019-09-25 05:17:26.615203] D [MSGID: 0] > [glusterd-utils.c:1136:glusterd_resolve_brick] 0-management: Returning 0 > [2019-09-25 05:17:26.615555] D [MSGID: 0] > [glusterd-mgmt.c:243:gd_mgmt_v3_pre_validate_fn] 0-management: OP = 5. > Returning 0 > [2019-09-25 05:17:26.616271] D [MSGID: 0] > [glusterd-utils.c:1767:glusterd_volinfo_find] 0-management: Volume > mdsgv01 found > [2019-09-25 05:17:26.616305] D [MSGID: 0] > [glusterd-utils.c:1774:glusterd_volinfo_find] 0-management: Returning 0 > [2019-09-25 05:17:26.616327] D [MSGID: 0] > [glusterd-utils.c:6327:glusterd_brick_start] 0-management: returning 0 > [2019-09-25 05:17:26.617056] I > [glusterd-utils.c:6312:glusterd_brick_start] 0-management: starting a > fresh brick process for brick /mnt/p01-d01/glusterv01 > [2019-09-25 05:17:26.722717] E [MSGID: 106005] > [glusterd-utils.c:6317:glusterd_brick_start] 0-management: Unable to > start brick mdskvm-p01.nix.mds.xyz:/mnt/p01-d01/glusterv01 > [2019-09-25 05:17:26.722960] D [MSGID: 0] > [glusterd-utils.c:6327:glusterd_brick_start] 0-management: returning > -107 > [2019-09-25 05:17:26.723006] E [MSGID: 106122] > [glusterd-mgmt.c:341:gd_mgmt_v3_commit_fn] 0-management: Volume start > commit failed. > [2019-09-25 05:17:26.723027] D [MSGID: 0] > [glusterd-mgmt.c:444:gd_mgmt_v3_commit_fn] 0-management: OP = 5. > Returning -107 > [2019-09-25 05:17:26.723045] E [MSGID: 106122] > [glusterd-mgmt.c:1696:glusterd_mgmt_v3_commit] 0-management: Commit > failed for operation Start on local node > [2019-09-25 05:17:26.723073] D [MSGID: 0] > [glusterd-op-sm.c:5106:glusterd_op_modify_op_ctx] 0-management: op_ctx > modification not required > [2019-09-25 05:17:26.723141] E [MSGID: 106122] > [glusterd-mgmt.c:2466:glusterd_mgmt_v3_initiate_all_phases] > 0-management: Commit Op Failed > [2019-09-25 05:17:26.723204] D [MSGID: 0] > [glusterd-locks.c:797:glusterd_mgmt_v3_unlock] 0-management: Trying to > release lock of vol mdsgv01 for f7336db6-22b4-497d-8c2f-04c833a28546 as > mdsgv01_vol > [2019-09-25 05:17:26.723239] D [MSGID: 0] > [glusterd-locks.c:846:glusterd_mgmt_v3_unlock] 0-management: Lock for > vol mdsgv01 successfully released > [2019-09-25 05:17:26.723273] D [MSGID: 0] > [glusterd-utils.c:1767:glusterd_volinfo_find] 0-management: Volume > mdsgv01 found > [2019-09-25 05:17:26.723326] D [MSGID: 0] > [glusterd-utils.c:1774:glusterd_volinfo_find] 0-management: Returning 0 > [2019-09-25 05:17:26.723360] D [MSGID: 0] > [glusterd-locks.c:464:glusterd_multiple_mgmt_v3_unlock] 0-management: > Returning 0 > > ==> /var/log/glusterfs/cmd_history.log <== > [2019-09-25 05:17:26.723390]? 
: volume start mdsgv01 : FAILED : Commit > failed on localhost. Please check log file for details. > > ==> /var/log/glusterfs/glusterd.log <== > [2019-09-25 05:17:26.723479] D [MSGID: 0] > [glusterd-rpc-ops.c:199:glusterd_op_send_cli_response] 0-management: > Returning 0 > > > > [root at mdskvm-p01 glusterfs]# cat /etc/glusterfs/glusterd.vol > volume management > ? ? ?type mgmt/glusterd > ? ? ?option working-directory /var/lib/glusterd > ? ? ?option transport-type socket,rdma > ? ? ?option transport.socket.keepalive-time 10 > ? ? ?option transport.socket.keepalive-interval 2 > ? ? ?option transport.socket.read-fail-log off > ? ? ?option ping-timeout 0 > ? ? ?option event-threads 1 > ? ? ?option rpc-auth-allow-insecure on > ? ? ?# option cluster.server-quorum-type server > ? ? ?# option cluster.quorum-type auto > ? ? ?option server.event-threads 8 > ? ? ?option client.event-threads 8 > ? ? ?option performance.write-behind-window-size 8MB > ? ? ?option performance.io-thread-count 16 > ? ? ?option performance.cache-size 1GB > ? ? ?option nfs.trusted-sync on > ? ? ?option storage.owner-uid 36 > ? ? ?option storage.owner-uid 36 > ? ? ?option cluster.data-self-heal-algorithm full > ? ? ?option performance.low-prio-threads 32 > ? ? ?option features.shard-block-size 512MB > ? ? ?option features.shard on > end-volume > [root at mdskvm-p01 glusterfs]# > > > [root at mdskvm-p01 glusterfs]# gluster volume info > > Volume Name: mdsgv01 > Type: Replicate > Volume ID: f5b57076-dbd4-4d77-ae13-c1f3ee3adbe0 > Status: Stopped > Snapshot Count: 0 > Number of Bricks: 1 x 2 = 2 > Transport-type: tcp > Bricks: > Brick1: mdskvm-p02.nix.mds.xyz:/mnt/p02-d01/glusterv02 > Brick2: mdskvm-p01.nix.mds.xyz:/mnt/p01-d01/glusterv01 > Options Reconfigured: > storage.owner-gid: 36 > cluster.data-self-heal-algorithm: full > performance.low-prio-threads: 32 > features.shard-block-size: 512MB > features.shard: on > storage.owner-uid: 36 > cluster.server-quorum-type: none > cluster.quorum-type: none > server.event-threads: 8 > client.event-threads: 8 > performance.write-behind-window-size: 8MB > performance.io-thread-count: 16 > performance.cache-size: 1GB > nfs.trusted-sync: on > server.allow-insecure: on > performance.readdir-ahead: on > diagnostics.brick-log-level: DEBUG > diagnostics.brick-sys-log-level: INFO > diagnostics.client-log-level: DEBUG > [root at mdskvm-p01 glusterfs]# > > > _______________________________________________ > > Community Meeting Calendar: > > APAC Schedule - > Every 2nd and 4th Tuesday at 11:30 AM IST > Bridge: https://bluejeans.com/118564314 > > NA/EMEA Schedule - > Every 1st and 3rd Tuesday at 01:00 PM EDT > Bridge: https://bluejeans.com/118564314 > > Gluster-devel mailing list > Gluster-devel at gluster.org > https://lists.gluster.org/mailman/listinfo/gluster-devel > > > > -- > Thanks, > Sanju -- Thx, TK. -------------- next part -------------- A non-text attachment was scrubbed... 
Name: glusterd-logs.tar.gz Type: application/x-gzip Size: 683318 bytes Desc: not available URL: From tomkcpr at mdevsys.com Wed Sep 25 10:56:32 2019 From: tomkcpr at mdevsys.com (TomK) Date: Wed, 25 Sep 2019 06:56:32 -0400 Subject: [Gluster-devel] 0-management: Commit failed for operation Start on local node In-Reply-To: <34a797ee-f0f1-4b49-452b-1d4b6166c11b@mdevsys.com> References: <34a797ee-f0f1-4b49-452b-1d4b6166c11b@mdevsys.com> Message-ID: Brick log for specific gluster start command attempt (full log attached): [2019-09-25 10:53:37.847426] I [MSGID: 100030] [glusterfsd.c:2847:main] 0-/usr/sbin/glusterfsd: Started running /usr/sbin/glusterfsd version 6.5 (args: /usr/sbin/glusterfsd -s mdskvm-p01.nix.mds.xyz --volfile-id mdsgv01.mdskvm-p01.nix.mds.xyz.mnt-p01-d01-glusterv01 -p /var/run/gluster/vols/mdsgv01/mdskvm-p01.nix.mds.xyz-mnt-p01-d01-glusterv01.pid -S /var/run/gluster/defbdb699838d53b.socket --brick-name /mnt/p01-d01/glusterv01 -l /var/log/glusterfs/bricks/mnt-p01-d01-glusterv01.log --xlator-option *-posix.glusterd-uuid=f7336db6-22b4-497d-8c2f-04c833a28546 --process-name brick --brick-port 49155 --xlator-option mdsgv01-server.listen-port=49155) [2019-09-25 10:53:37.848508] I [glusterfsd.c:2556:daemonize] 0-glusterfs: Pid of current running process is 23133 [2019-09-25 10:53:37.858381] I [socket.c:902:__socket_server_bind] 0-socket.glusterfsd: closing (AF_UNIX) reuse check socket 9 [2019-09-25 10:53:37.865940] I [MSGID: 101190] [event-epoll.c:680:event_dispatch_epoll_worker] 0-epoll: Started thread with index 0 [2019-09-25 10:53:37.866054] I [glusterfsd-mgmt.c:2443:mgmt_rpc_notify] 0-glusterfsd-mgmt: disconnected from remote-host: mdskvm-p01.nix.mds.xyz [2019-09-25 10:53:37.866043] I [MSGID: 101190] [event-epoll.c:680:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1 [2019-09-25 10:53:37.866083] I [glusterfsd-mgmt.c:2463:mgmt_rpc_notify] 0-glusterfsd-mgmt: Exhausted all volfile servers [2019-09-25 10:53:37.866454] W [glusterfsd.c:1570:cleanup_and_exit] (-->/lib64/libgfrpc.so.0(+0xf1d3) [0x7f9680ee91d3] -->/usr/sbin/glusterfsd(+0x12fef) [0x55ca25710fef] -->/usr/sbin/glusterfsd(cleanup_and_exit+0x6b) [0x55ca2570901b] ) 0-: received signum (1), shutting down [2019-09-25 10:53:37.872399] I [socket.c:3754:socket_submit_outgoing_msg] 0-glusterfs: not connected (priv->connected = 0) [2019-09-25 10:53:37.872445] W [rpc-clnt.c:1704:rpc_clnt_submit] 0-glusterfs: failed to submit rpc-request (unique: 0, XID: 0x2 Program: Gluster Portmap, ProgVers: 1, Proc: 5) to rpc-transport (glusterfs) [2019-09-25 10:53:37.872534] W [glusterfsd.c:1570:cleanup_and_exit] (-->/lib64/libgfrpc.so.0(+0xf1d3) [0x7f9680ee91d3] -->/usr/sbin/glusterfsd(+0x12fef) [0x55ca25710fef] -->/usr/sbin/glusterfsd(cleanup_and_exit+0x6b) [0x55ca2570901b] ) 0-: received signum (1), shutting down On 9/25/2019 6:48 AM, TomK wrote: > Attached. > > > On 9/25/2019 5:08 AM, Sanju Rakonde wrote: >> Hi, The below errors indicate that brick process is failed to start. >> Please attach brick log. 
>> >> [glusterd-utils.c:6312:glusterd_brick_start] 0-management: starting a >> fresh brick process for brick /mnt/p01-d01/glusterv01 >> [2019-09-25 05:17:26.722717] E [MSGID: 106005] >> [glusterd-utils.c:6317:glusterd_brick_start] 0-management: Unable to >> start brick mdskvm-p01.nix.mds.xyz:/mnt/p01-d01/glusterv01 >> [2019-09-25 05:17:26.722960] D [MSGID: 0] >> [glusterd-utils.c:6327:glusterd_brick_start] 0-management: returning -107 >> [2019-09-25 05:17:26.723006] E [MSGID: 106122] >> [glusterd-mgmt.c:341:gd_mgmt_v3_commit_fn] 0-management: Volume start >> commit failed. >> >> On Wed, Sep 25, 2019 at 11:00 AM TomK > > wrote: >> >> ??? Hey All, >> >> ??? I'm getting the below error when trying to start a 2 node Gluster >> ??? cluster. >> >> ??? I had the quorum enabled when I was at version 3.12 .? However with >> ??? this >> ??? version it needed the quorum disabled.? So I did so however now >> see the >> ??? subject error. >> >> ??? Any ideas what I could try next? >> >> ??? -- ??? Thx, >> ??? TK. >> >> >> ??? [2019-09-25 05:17:26.615203] D [MSGID: 0] >> ??? [glusterd-utils.c:1136:glusterd_resolve_brick] 0-management: >> Returning 0 >> ??? [2019-09-25 05:17:26.615555] D [MSGID: 0] >> ??? [glusterd-mgmt.c:243:gd_mgmt_v3_pre_validate_fn] 0-management: OP >> = 5. >> ??? Returning 0 >> ??? [2019-09-25 05:17:26.616271] D [MSGID: 0] >> ??? [glusterd-utils.c:1767:glusterd_volinfo_find] 0-management: Volume >> ??? mdsgv01 found >> ??? [2019-09-25 05:17:26.616305] D [MSGID: 0] >> ??? [glusterd-utils.c:1774:glusterd_volinfo_find] 0-management: >> Returning 0 >> ??? [2019-09-25 05:17:26.616327] D [MSGID: 0] >> ??? [glusterd-utils.c:6327:glusterd_brick_start] 0-management: >> returning 0 >> ??? [2019-09-25 05:17:26.617056] I >> ??? [glusterd-utils.c:6312:glusterd_brick_start] 0-management: starting a >> ??? fresh brick process for brick /mnt/p01-d01/glusterv01 >> ??? [2019-09-25 05:17:26.722717] E [MSGID: 106005] >> ??? [glusterd-utils.c:6317:glusterd_brick_start] 0-management: Unable to >> ??? start brick mdskvm-p01.nix.mds.xyz:/mnt/p01-d01/glusterv01 >> ??? [2019-09-25 05:17:26.722960] D [MSGID: 0] >> ??? [glusterd-utils.c:6327:glusterd_brick_start] 0-management: returning >> ??? -107 >> ??? [2019-09-25 05:17:26.723006] E [MSGID: 106122] >> ??? [glusterd-mgmt.c:341:gd_mgmt_v3_commit_fn] 0-management: Volume start >> ??? commit failed. >> ??? [2019-09-25 05:17:26.723027] D [MSGID: 0] >> ??? [glusterd-mgmt.c:444:gd_mgmt_v3_commit_fn] 0-management: OP = 5. >> ??? Returning -107 >> ??? [2019-09-25 05:17:26.723045] E [MSGID: 106122] >> ??? [glusterd-mgmt.c:1696:glusterd_mgmt_v3_commit] 0-management: Commit >> ??? failed for operation Start on local node >> ??? [2019-09-25 05:17:26.723073] D [MSGID: 0] >> ??? [glusterd-op-sm.c:5106:glusterd_op_modify_op_ctx] 0-management: >> op_ctx >> ??? modification not required >> ??? [2019-09-25 05:17:26.723141] E [MSGID: 106122] >> ??? [glusterd-mgmt.c:2466:glusterd_mgmt_v3_initiate_all_phases] >> ??? 0-management: Commit Op Failed >> ??? [2019-09-25 05:17:26.723204] D [MSGID: 0] >> ??? [glusterd-locks.c:797:glusterd_mgmt_v3_unlock] 0-management: >> Trying to >> ??? release lock of vol mdsgv01 for >> f7336db6-22b4-497d-8c2f-04c833a28546 as >> ??? mdsgv01_vol >> ??? [2019-09-25 05:17:26.723239] D [MSGID: 0] >> ??? [glusterd-locks.c:846:glusterd_mgmt_v3_unlock] 0-management: Lock for >> ??? vol mdsgv01 successfully released >> ??? [2019-09-25 05:17:26.723273] D [MSGID: 0] >> ??? [glusterd-utils.c:1767:glusterd_volinfo_find] 0-management: Volume >> ??? 
mdsgv01 found >> ??? [2019-09-25 05:17:26.723326] D [MSGID: 0] >> ??? [glusterd-utils.c:1774:glusterd_volinfo_find] 0-management: >> Returning 0 >> ??? [2019-09-25 05:17:26.723360] D [MSGID: 0] >> ??? [glusterd-locks.c:464:glusterd_multiple_mgmt_v3_unlock] 0-management: >> ??? Returning 0 >> >> ??? ==> /var/log/glusterfs/cmd_history.log <== >> ??? [2019-09-25 05:17:26.723390]? : volume start mdsgv01 : FAILED : >> Commit >> ??? failed on localhost. Please check log file for details. >> >> ??? ==> /var/log/glusterfs/glusterd.log <== >> ??? [2019-09-25 05:17:26.723479] D [MSGID: 0] >> ??? [glusterd-rpc-ops.c:199:glusterd_op_send_cli_response] 0-management: >> ??? Returning 0 >> >> >> >> ??? [root at mdskvm-p01 glusterfs]# cat /etc/glusterfs/glusterd.vol >> ??? volume management >> ???? ? ? ?type mgmt/glusterd >> ???? ? ? ?option working-directory /var/lib/glusterd >> ???? ? ? ?option transport-type socket,rdma >> ???? ? ? ?option transport.socket.keepalive-time 10 >> ???? ? ? ?option transport.socket.keepalive-interval 2 >> ???? ? ? ?option transport.socket.read-fail-log off >> ???? ? ? ?option ping-timeout 0 >> ???? ? ? ?option event-threads 1 >> ???? ? ? ?option rpc-auth-allow-insecure on >> ???? ? ? ?# option cluster.server-quorum-type server >> ???? ? ? ?# option cluster.quorum-type auto >> ???? ? ? ?option server.event-threads 8 >> ???? ? ? ?option client.event-threads 8 >> ???? ? ? ?option performance.write-behind-window-size 8MB >> ???? ? ? ?option performance.io-thread-count 16 >> ???? ? ? ?option performance.cache-size 1GB >> ???? ? ? ?option nfs.trusted-sync on >> ???? ? ? ?option storage.owner-uid 36 >> ???? ? ? ?option storage.owner-uid 36 >> ???? ? ? ?option cluster.data-self-heal-algorithm full >> ???? ? ? ?option performance.low-prio-threads 32 >> ???? ? ? ?option features.shard-block-size 512MB >> ???? ? ? ?option features.shard on >> ??? end-volume >> ??? [root at mdskvm-p01 glusterfs]# >> >> >> ??? [root at mdskvm-p01 glusterfs]# gluster volume info >> >> ??? Volume Name: mdsgv01 >> ??? Type: Replicate >> ??? Volume ID: f5b57076-dbd4-4d77-ae13-c1f3ee3adbe0 >> ??? Status: Stopped >> ??? Snapshot Count: 0 >> ??? Number of Bricks: 1 x 2 = 2 >> ??? Transport-type: tcp >> ??? Bricks: >> ??? Brick1: mdskvm-p02.nix.mds.xyz:/mnt/p02-d01/glusterv02 >> ??? Brick2: mdskvm-p01.nix.mds.xyz:/mnt/p01-d01/glusterv01 >> ??? Options Reconfigured: >> ??? storage.owner-gid: 36 >> ??? cluster.data-self-heal-algorithm: full >> ??? performance.low-prio-threads: 32 >> ??? features.shard-block-size: 512MB >> ??? features.shard: on >> ??? storage.owner-uid: 36 >> ??? cluster.server-quorum-type: none >> ??? cluster.quorum-type: none >> ??? server.event-threads: 8 >> ??? client.event-threads: 8 >> ??? performance.write-behind-window-size: 8MB >> ??? performance.io-thread-count: 16 >> ??? performance.cache-size: 1GB >> ??? nfs.trusted-sync: on >> ??? server.allow-insecure: on >> ??? performance.readdir-ahead: on >> ??? diagnostics.brick-log-level: DEBUG >> ??? diagnostics.brick-sys-log-level: INFO >> ??? diagnostics.client-log-level: DEBUG >> ??? [root at mdskvm-p01 glusterfs]# >> >> >> ??? _______________________________________________ >> >> ??? Community Meeting Calendar: >> >> ??? APAC Schedule - >> ??? Every 2nd and 4th Tuesday at 11:30 AM IST >> ??? Bridge: https://bluejeans.com/118564314 >> >> ??? NA/EMEA Schedule - >> ??? Every 1st and 3rd Tuesday at 01:00 PM EDT >> ??? Bridge: https://bluejeans.com/118564314 >> >> ??? Gluster-devel mailing list >> ??? Gluster-devel at gluster.org >> ??? 
https://lists.gluster.org/mailman/listinfo/gluster-devel >> >> >> >> -- >> Thanks, >> Sanju > > > > _______________________________________________ > > Community Meeting Calendar: > > APAC Schedule - > Every 2nd and 4th Tuesday at 11:30 AM IST > Bridge: https://bluejeans.com/118564314 > > NA/EMEA Schedule - > Every 1st and 3rd Tuesday at 01:00 PM EDT > Bridge: https://bluejeans.com/118564314 > > Gluster-devel mailing list > Gluster-devel at gluster.org > https://lists.gluster.org/mailman/listinfo/gluster-devel > -- Thx, TK. -------------- next part -------------- A non-text attachment was scrubbed... Name: glusterd-brick.tar.gz Type: application/x-gzip Size: 28548 bytes Desc: not available URL: From tomkcpr at mdevsys.com Wed Sep 25 11:05:00 2019 From: tomkcpr at mdevsys.com (TomK) Date: Wed, 25 Sep 2019 07:05:00 -0400 Subject: [Gluster-devel] 0-management: Commit failed for operation Start on local node In-Reply-To: References: <34a797ee-f0f1-4b49-452b-1d4b6166c11b@mdevsys.com> Message-ID: <9e668a11-6d53-21cf-0b60-b4e4a99b4c56@mdevsys.com> Mind you, I just upgraded from 3.12 to 6.X. On 9/25/2019 6:56 AM, TomK wrote: > > > Brick log for specific gluster start command attempt (full log attached): > > [2019-09-25 10:53:37.847426] I [MSGID: 100030] [glusterfsd.c:2847:main] > 0-/usr/sbin/glusterfsd: Started running /usr/sbin/glusterfsd version 6.5 > (args: /usr/sbin/glusterfsd -s mdskvm-p01.nix.mds.xyz --volfile-id > mdsgv01.mdskvm-p01.nix.mds.xyz.mnt-p01-d01-glusterv01 -p > /var/run/gluster/vols/mdsgv01/mdskvm-p01.nix.mds.xyz-mnt-p01-d01-glusterv01.pid > -S /var/run/gluster/defbdb699838d53b.socket --brick-name > /mnt/p01-d01/glusterv01 -l > /var/log/glusterfs/bricks/mnt-p01-d01-glusterv01.log --xlator-option > *-posix.glusterd-uuid=f7336db6-22b4-497d-8c2f-04c833a28546 > --process-name brick --brick-port 49155 --xlator-option > mdsgv01-server.listen-port=49155) > [2019-09-25 10:53:37.848508] I [glusterfsd.c:2556:daemonize] > 0-glusterfs: Pid of current running process is 23133 > [2019-09-25 10:53:37.858381] I [socket.c:902:__socket_server_bind] > 0-socket.glusterfsd: closing (AF_UNIX) reuse check socket 9 > [2019-09-25 10:53:37.865940] I [MSGID: 101190] > [event-epoll.c:680:event_dispatch_epoll_worker] 0-epoll: Started thread > with index 0 > [2019-09-25 10:53:37.866054] I [glusterfsd-mgmt.c:2443:mgmt_rpc_notify] > 0-glusterfsd-mgmt: disconnected from remote-host: mdskvm-p01.nix.mds.xyz > [2019-09-25 10:53:37.866043] I [MSGID: 101190] > [event-epoll.c:680:event_dispatch_epoll_worker] 0-epoll: Started thread > with index 1 > [2019-09-25 10:53:37.866083] I [glusterfsd-mgmt.c:2463:mgmt_rpc_notify] > 0-glusterfsd-mgmt: Exhausted all volfile servers > [2019-09-25 10:53:37.866454] W [glusterfsd.c:1570:cleanup_and_exit] > (-->/lib64/libgfrpc.so.0(+0xf1d3) [0x7f9680ee91d3] > -->/usr/sbin/glusterfsd(+0x12fef) [0x55ca25710fef] > -->/usr/sbin/glusterfsd(cleanup_and_exit+0x6b) [0x55ca2570901b] ) 0-: > received signum (1), shutting down > [2019-09-25 10:53:37.872399] I > [socket.c:3754:socket_submit_outgoing_msg] 0-glusterfs: not connected > (priv->connected = 0) > [2019-09-25 10:53:37.872445] W [rpc-clnt.c:1704:rpc_clnt_submit] > 0-glusterfs: failed to submit rpc-request (unique: 0, XID: 0x2 Program: > Gluster Portmap, ProgVers: 1, Proc: 5) to rpc-transport (glusterfs) > [2019-09-25 10:53:37.872534] W [glusterfsd.c:1570:cleanup_and_exit] > (-->/lib64/libgfrpc.so.0(+0xf1d3) [0x7f9680ee91d3] > -->/usr/sbin/glusterfsd(+0x12fef) [0x55ca25710fef] > -->/usr/sbin/glusterfsd(cleanup_and_exit+0x6b) 
[0x55ca2570901b] ) 0-: > received signum (1), shutting down > > > > > > On 9/25/2019 6:48 AM, TomK wrote: >> Attached. >> >> >> On 9/25/2019 5:08 AM, Sanju Rakonde wrote: >>> Hi, The below errors indicate that brick process is failed to start. >>> Please attach brick log. >>> >>> [glusterd-utils.c:6312:glusterd_brick_start] 0-management: starting a >>> fresh brick process for brick /mnt/p01-d01/glusterv01 >>> [2019-09-25 05:17:26.722717] E [MSGID: 106005] >>> [glusterd-utils.c:6317:glusterd_brick_start] 0-management: Unable to >>> start brick mdskvm-p01.nix.mds.xyz:/mnt/p01-d01/glusterv01 >>> [2019-09-25 05:17:26.722960] D [MSGID: 0] >>> [glusterd-utils.c:6327:glusterd_brick_start] 0-management: returning >>> -107 >>> [2019-09-25 05:17:26.723006] E [MSGID: 106122] >>> [glusterd-mgmt.c:341:gd_mgmt_v3_commit_fn] 0-management: Volume start >>> commit failed. >>> >>> On Wed, Sep 25, 2019 at 11:00 AM TomK >> > wrote: >>> >>> ??? Hey All, >>> >>> ??? I'm getting the below error when trying to start a 2 node Gluster >>> ??? cluster. >>> >>> ??? I had the quorum enabled when I was at version 3.12 .? However with >>> ??? this >>> ??? version it needed the quorum disabled.? So I did so however now >>> see the >>> ??? subject error. >>> >>> ??? Any ideas what I could try next? >>> >>> ??? -- ??? Thx, >>> ??? TK. >>> >>> >>> ??? [2019-09-25 05:17:26.615203] D [MSGID: 0] >>> ??? [glusterd-utils.c:1136:glusterd_resolve_brick] 0-management: >>> Returning 0 >>> ??? [2019-09-25 05:17:26.615555] D [MSGID: 0] >>> ??? [glusterd-mgmt.c:243:gd_mgmt_v3_pre_validate_fn] 0-management: OP >>> = 5. >>> ??? Returning 0 >>> ??? [2019-09-25 05:17:26.616271] D [MSGID: 0] >>> ??? [glusterd-utils.c:1767:glusterd_volinfo_find] 0-management: Volume >>> ??? mdsgv01 found >>> ??? [2019-09-25 05:17:26.616305] D [MSGID: 0] >>> ??? [glusterd-utils.c:1774:glusterd_volinfo_find] 0-management: >>> Returning 0 >>> ??? [2019-09-25 05:17:26.616327] D [MSGID: 0] >>> ??? [glusterd-utils.c:6327:glusterd_brick_start] 0-management: >>> returning 0 >>> ??? [2019-09-25 05:17:26.617056] I >>> ??? [glusterd-utils.c:6312:glusterd_brick_start] 0-management: >>> starting a >>> ??? fresh brick process for brick /mnt/p01-d01/glusterv01 >>> ??? [2019-09-25 05:17:26.722717] E [MSGID: 106005] >>> ??? [glusterd-utils.c:6317:glusterd_brick_start] 0-management: Unable to >>> ??? start brick mdskvm-p01.nix.mds.xyz:/mnt/p01-d01/glusterv01 >>> ??? [2019-09-25 05:17:26.722960] D [MSGID: 0] >>> ??? [glusterd-utils.c:6327:glusterd_brick_start] 0-management: returning >>> ??? -107 >>> ??? [2019-09-25 05:17:26.723006] E [MSGID: 106122] >>> ??? [glusterd-mgmt.c:341:gd_mgmt_v3_commit_fn] 0-management: Volume >>> start >>> ??? commit failed. >>> ??? [2019-09-25 05:17:26.723027] D [MSGID: 0] >>> ??? [glusterd-mgmt.c:444:gd_mgmt_v3_commit_fn] 0-management: OP = 5. >>> ??? Returning -107 >>> ??? [2019-09-25 05:17:26.723045] E [MSGID: 106122] >>> ??? [glusterd-mgmt.c:1696:glusterd_mgmt_v3_commit] 0-management: Commit >>> ??? failed for operation Start on local node >>> ??? [2019-09-25 05:17:26.723073] D [MSGID: 0] >>> ??? [glusterd-op-sm.c:5106:glusterd_op_modify_op_ctx] 0-management: >>> op_ctx >>> ??? modification not required >>> ??? [2019-09-25 05:17:26.723141] E [MSGID: 106122] >>> ??? [glusterd-mgmt.c:2466:glusterd_mgmt_v3_initiate_all_phases] >>> ??? 0-management: Commit Op Failed >>> ??? [2019-09-25 05:17:26.723204] D [MSGID: 0] >>> ??? [glusterd-locks.c:797:glusterd_mgmt_v3_unlock] 0-management: >>> Trying to >>> ??? 
release lock of vol mdsgv01 for >>> f7336db6-22b4-497d-8c2f-04c833a28546 as >>> ??? mdsgv01_vol >>> ??? [2019-09-25 05:17:26.723239] D [MSGID: 0] >>> ??? [glusterd-locks.c:846:glusterd_mgmt_v3_unlock] 0-management: Lock >>> for >>> ??? vol mdsgv01 successfully released >>> ??? [2019-09-25 05:17:26.723273] D [MSGID: 0] >>> ??? [glusterd-utils.c:1767:glusterd_volinfo_find] 0-management: Volume >>> ??? mdsgv01 found >>> ??? [2019-09-25 05:17:26.723326] D [MSGID: 0] >>> ??? [glusterd-utils.c:1774:glusterd_volinfo_find] 0-management: >>> Returning 0 >>> ??? [2019-09-25 05:17:26.723360] D [MSGID: 0] >>> ??? [glusterd-locks.c:464:glusterd_multiple_mgmt_v3_unlock] >>> 0-management: >>> ??? Returning 0 >>> >>> ??? ==> /var/log/glusterfs/cmd_history.log <== >>> ??? [2019-09-25 05:17:26.723390]? : volume start mdsgv01 : FAILED : >>> Commit >>> ??? failed on localhost. Please check log file for details. >>> >>> ??? ==> /var/log/glusterfs/glusterd.log <== >>> ??? [2019-09-25 05:17:26.723479] D [MSGID: 0] >>> ??? [glusterd-rpc-ops.c:199:glusterd_op_send_cli_response] 0-management: >>> ??? Returning 0 >>> >>> >>> >>> ??? [root at mdskvm-p01 glusterfs]# cat /etc/glusterfs/glusterd.vol >>> ??? volume management >>> ???? ? ? ?type mgmt/glusterd >>> ???? ? ? ?option working-directory /var/lib/glusterd >>> ???? ? ? ?option transport-type socket,rdma >>> ???? ? ? ?option transport.socket.keepalive-time 10 >>> ???? ? ? ?option transport.socket.keepalive-interval 2 >>> ???? ? ? ?option transport.socket.read-fail-log off >>> ???? ? ? ?option ping-timeout 0 >>> ???? ? ? ?option event-threads 1 >>> ???? ? ? ?option rpc-auth-allow-insecure on >>> ???? ? ? ?# option cluster.server-quorum-type server >>> ???? ? ? ?# option cluster.quorum-type auto >>> ???? ? ? ?option server.event-threads 8 >>> ???? ? ? ?option client.event-threads 8 >>> ???? ? ? ?option performance.write-behind-window-size 8MB >>> ???? ? ? ?option performance.io-thread-count 16 >>> ???? ? ? ?option performance.cache-size 1GB >>> ???? ? ? ?option nfs.trusted-sync on >>> ???? ? ? ?option storage.owner-uid 36 >>> ???? ? ? ?option storage.owner-uid 36 >>> ???? ? ? ?option cluster.data-self-heal-algorithm full >>> ???? ? ? ?option performance.low-prio-threads 32 >>> ???? ? ? ?option features.shard-block-size 512MB >>> ???? ? ? ?option features.shard on >>> ??? end-volume >>> ??? [root at mdskvm-p01 glusterfs]# >>> >>> >>> ??? [root at mdskvm-p01 glusterfs]# gluster volume info >>> >>> ??? Volume Name: mdsgv01 >>> ??? Type: Replicate >>> ??? Volume ID: f5b57076-dbd4-4d77-ae13-c1f3ee3adbe0 >>> ??? Status: Stopped >>> ??? Snapshot Count: 0 >>> ??? Number of Bricks: 1 x 2 = 2 >>> ??? Transport-type: tcp >>> ??? Bricks: >>> ??? Brick1: mdskvm-p02.nix.mds.xyz:/mnt/p02-d01/glusterv02 >>> ??? Brick2: mdskvm-p01.nix.mds.xyz:/mnt/p01-d01/glusterv01 >>> ??? Options Reconfigured: >>> ??? storage.owner-gid: 36 >>> ??? cluster.data-self-heal-algorithm: full >>> ??? performance.low-prio-threads: 32 >>> ??? features.shard-block-size: 512MB >>> ??? features.shard: on >>> ??? storage.owner-uid: 36 >>> ??? cluster.server-quorum-type: none >>> ??? cluster.quorum-type: none >>> ??? server.event-threads: 8 >>> ??? client.event-threads: 8 >>> ??? performance.write-behind-window-size: 8MB >>> ??? performance.io-thread-count: 16 >>> ??? performance.cache-size: 1GB >>> ??? nfs.trusted-sync: on >>> ??? server.allow-insecure: on >>> ??? performance.readdir-ahead: on >>> ??? diagnostics.brick-log-level: DEBUG >>> ??? diagnostics.brick-sys-log-level: INFO >>> ??? 
diagnostics.client-log-level: DEBUG >>> ??? [root at mdskvm-p01 glusterfs]# >>> >>> >>> ??? _______________________________________________ >>> >>> ??? Community Meeting Calendar: >>> >>> ??? APAC Schedule - >>> ??? Every 2nd and 4th Tuesday at 11:30 AM IST >>> ??? Bridge: https://bluejeans.com/118564314 >>> >>> ??? NA/EMEA Schedule - >>> ??? Every 1st and 3rd Tuesday at 01:00 PM EDT >>> ??? Bridge: https://bluejeans.com/118564314 >>> >>> ??? Gluster-devel mailing list >>> ??? Gluster-devel at gluster.org >>> ??? https://lists.gluster.org/mailman/listinfo/gluster-devel >>> >>> >>> >>> -- >>> Thanks, >>> Sanju >> >> >> >> _______________________________________________ >> >> Community Meeting Calendar: >> >> APAC Schedule - >> Every 2nd and 4th Tuesday at 11:30 AM IST >> Bridge: https://bluejeans.com/118564314 >> >> NA/EMEA Schedule - >> Every 1st and 3rd Tuesday at 01:00 PM EDT >> Bridge: https://bluejeans.com/118564314 >> >> Gluster-devel mailing list >> Gluster-devel at gluster.org >> https://lists.gluster.org/mailman/listinfo/gluster-devel >> > > > > _______________________________________________ > > Community Meeting Calendar: > > APAC Schedule - > Every 2nd and 4th Tuesday at 11:30 AM IST > Bridge: https://bluejeans.com/118564314 > > NA/EMEA Schedule - > Every 1st and 3rd Tuesday at 01:00 PM EDT > Bridge: https://bluejeans.com/118564314 > > Gluster-devel mailing list > Gluster-devel at gluster.org > https://lists.gluster.org/mailman/listinfo/gluster-devel > -- Thx, TK. From tomkcpr at mdevsys.com Wed Sep 25 11:17:07 2019 From: tomkcpr at mdevsys.com (TomK) Date: Wed, 25 Sep 2019 07:17:07 -0400 Subject: [Gluster-devel] 0-management: Commit failed for operation Start on local node In-Reply-To: <9e668a11-6d53-21cf-0b60-b4e4a99b4c56@mdevsys.com> References: <34a797ee-f0f1-4b49-452b-1d4b6166c11b@mdevsys.com> <9e668a11-6d53-21cf-0b60-b4e4a99b4c56@mdevsys.com> Message-ID: This issue looked nearly identical to: https://bugzilla.redhat.com/show_bug.cgi?id=1702316 so tried: option transport.socket.listen-port 24007 And it worked: [root at mdskvm-p01 glusterfs]# systemctl stop glusterd [root at mdskvm-p01 glusterfs]# history|grep server-quorum 3149 gluster volume set mdsgv01 cluster.server-quorum-type none 3186 history|grep server-quorum [root at mdskvm-p01 glusterfs]# gluster volume set mdsgv01 transport.socket.listen-port 24007 Connection failed. Please check if gluster daemon is operational. [root at mdskvm-p01 glusterfs]# systemctl start glusterd [root at mdskvm-p01 glusterfs]# gluster volume set mdsgv01 transport.socket.listen-port 24007 volume set: failed: option : transport.socket.listen-port does not exist Did you mean transport.keepalive or ...listen-backlog? 
[root at mdskvm-p01 glusterfs]# [root at mdskvm-p01 glusterfs]# netstat -pnltu Active Internet connections (only servers) Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name tcp 0 0 0.0.0.0:16514 0.0.0.0:* LISTEN 4562/libvirtd tcp 0 0 0.0.0.0:24007 0.0.0.0:* LISTEN 24193/glusterd tcp 0 0 0.0.0.0:2223 0.0.0.0:* LISTEN 4277/sshd tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN 1/systemd tcp 0 0 0.0.0.0:51760 0.0.0.0:* LISTEN 4479/rpc.statd tcp 0 0 0.0.0.0:54322 0.0.0.0:* LISTEN 13229/python tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 4279/sshd tcp6 0 0 :::54811 :::* LISTEN 4479/rpc.statd tcp6 0 0 :::16514 :::* LISTEN 4562/libvirtd tcp6 0 0 :::2223 :::* LISTEN 4277/sshd tcp6 0 0 :::111 :::* LISTEN 3357/rpcbind tcp6 0 0 :::54321 :::* LISTEN 13225/python2 tcp6 0 0 :::22 :::* LISTEN 4279/sshd udp 0 0 0.0.0.0:24009 0.0.0.0:* 4281/python2 udp 0 0 0.0.0.0:38873 0.0.0.0:* 4479/rpc.statd udp 0 0 0.0.0.0:111 0.0.0.0:* 1/systemd udp 0 0 127.0.0.1:323 0.0.0.0:* 3361/chronyd udp 0 0 127.0.0.1:839 0.0.0.0:* 4479/rpc.statd udp 0 0 0.0.0.0:935 0.0.0.0:* 3357/rpcbind udp6 0 0 :::46947 :::* 4479/rpc.statd udp6 0 0 :::111 :::* 3357/rpcbind udp6 0 0 ::1:323 :::* 3361/chronyd udp6 0 0 :::935 :::* 3357/rpcbind [root at mdskvm-p01 glusterfs]# gluster volume start mdsgv01 volume start: mdsgv01: success [root at mdskvm-p01 glusterfs]# gluster volume info Volume Name: mdsgv01 Type: Replicate Volume ID: f5b57076-dbd4-4d77-ae13-c1f3ee3adbe0 Status: Started Snapshot Count: 0 Number of Bricks: 1 x 2 = 2 Transport-type: tcp Bricks: Brick1: mdskvm-p02.nix.mds.xyz:/mnt/p02-d01/glusterv02 Brick2: mdskvm-p01.nix.mds.xyz:/mnt/p01-d01/glusterv01 Options Reconfigured: storage.owner-gid: 36 cluster.data-self-heal-algorithm: full performance.low-prio-threads: 32 features.shard-block-size: 512MB features.shard: on storage.owner-uid: 36 cluster.server-quorum-type: none cluster.quorum-type: none server.event-threads: 8 client.event-threads: 8 performance.write-behind-window-size: 8MB performance.io-thread-count: 16 performance.cache-size: 1GB nfs.trusted-sync: on server.allow-insecure: on performance.readdir-ahead: on diagnostics.brick-log-level: DEBUG diagnostics.brick-sys-log-level: INFO diagnostics.client-log-level: DEBUG [root at mdskvm-p01 glusterfs]# gluster volume status Status of volume: mdsgv01 Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------ Brick mdskvm-p01.nix.mds.xyz:/mnt/p01-d01/g lusterv01 49152 0 Y 24487 NFS Server on localhost N/A N/A N N/A Self-heal Daemon on localhost N/A N/A Y 24515 Task Status of Volume mdsgv01 ------------------------------------------------------------------------------ There are no active volume tasks [root at mdskvm-p01 glusterfs]# cat /etc/glusterfs/glusterd.vol volume management type mgmt/glusterd option working-directory /var/lib/glusterd option transport-type socket,rdma option transport.socket.keepalive-time 10 option transport.socket.keepalive-interval 2 option transport.socket.read-fail-log off option ping-timeout 0 option event-threads 1 option rpc-auth-allow-insecure on option cluster.server-quorum-type none option cluster.quorum-type none # option cluster.server-quorum-type server # option cluster.quorum-type auto option server.event-threads 8 option client.event-threads 8 option performance.write-behind-window-size 8MB option performance.io-thread-count 16 option performance.cache-size 1GB option nfs.trusted-sync on option storage.owner-uid 36 option storage.owner-uid 36 option 
cluster.data-self-heal-algorithm full option performance.low-prio-threads 32 option features.shard-block-size 512MB option features.shard on option transport.socket.listen-port 24007 end-volume [root at mdskvm-p01 glusterfs]# Cheers, TK On 9/25/2019 7:05 AM, TomK wrote: > Mind you, I just upgraded from 3.12 to 6.X. > > On 9/25/2019 6:56 AM, TomK wrote: >> >> >> Brick log for specific gluster start command attempt (full log attached): >> >> [2019-09-25 10:53:37.847426] I [MSGID: 100030] >> [glusterfsd.c:2847:main] 0-/usr/sbin/glusterfsd: Started running >> /usr/sbin/glusterfsd version 6.5 (args: /usr/sbin/glusterfsd -s >> mdskvm-p01.nix.mds.xyz --volfile-id >> mdsgv01.mdskvm-p01.nix.mds.xyz.mnt-p01-d01-glusterv01 -p >> /var/run/gluster/vols/mdsgv01/mdskvm-p01.nix.mds.xyz-mnt-p01-d01-glusterv01.pid >> -S /var/run/gluster/defbdb699838d53b.socket --brick-name >> /mnt/p01-d01/glusterv01 -l >> /var/log/glusterfs/bricks/mnt-p01-d01-glusterv01.log --xlator-option >> *-posix.glusterd-uuid=f7336db6-22b4-497d-8c2f-04c833a28546 >> --process-name brick --brick-port 49155 --xlator-option >> mdsgv01-server.listen-port=49155) >> [2019-09-25 10:53:37.848508] I [glusterfsd.c:2556:daemonize] >> 0-glusterfs: Pid of current running process is 23133 >> [2019-09-25 10:53:37.858381] I [socket.c:902:__socket_server_bind] >> 0-socket.glusterfsd: closing (AF_UNIX) reuse check socket 9 >> [2019-09-25 10:53:37.865940] I [MSGID: 101190] >> [event-epoll.c:680:event_dispatch_epoll_worker] 0-epoll: Started >> thread with index 0 >> [2019-09-25 10:53:37.866054] I >> [glusterfsd-mgmt.c:2443:mgmt_rpc_notify] 0-glusterfsd-mgmt: >> disconnected from remote-host: mdskvm-p01.nix.mds.xyz >> [2019-09-25 10:53:37.866043] I [MSGID: 101190] >> [event-epoll.c:680:event_dispatch_epoll_worker] 0-epoll: Started >> thread with index 1 >> [2019-09-25 10:53:37.866083] I >> [glusterfsd-mgmt.c:2463:mgmt_rpc_notify] 0-glusterfsd-mgmt: Exhausted >> all volfile servers >> [2019-09-25 10:53:37.866454] W [glusterfsd.c:1570:cleanup_and_exit] >> (-->/lib64/libgfrpc.so.0(+0xf1d3) [0x7f9680ee91d3] >> -->/usr/sbin/glusterfsd(+0x12fef) [0x55ca25710fef] >> -->/usr/sbin/glusterfsd(cleanup_and_exit+0x6b) [0x55ca2570901b] ) 0-: >> received signum (1), shutting down >> [2019-09-25 10:53:37.872399] I >> [socket.c:3754:socket_submit_outgoing_msg] 0-glusterfs: not connected >> (priv->connected = 0) >> [2019-09-25 10:53:37.872445] W [rpc-clnt.c:1704:rpc_clnt_submit] >> 0-glusterfs: failed to submit rpc-request (unique: 0, XID: 0x2 >> Program: Gluster Portmap, ProgVers: 1, Proc: 5) to rpc-transport >> (glusterfs) >> [2019-09-25 10:53:37.872534] W [glusterfsd.c:1570:cleanup_and_exit] >> (-->/lib64/libgfrpc.so.0(+0xf1d3) [0x7f9680ee91d3] >> -->/usr/sbin/glusterfsd(+0x12fef) [0x55ca25710fef] >> -->/usr/sbin/glusterfsd(cleanup_and_exit+0x6b) [0x55ca2570901b] ) 0-: >> received signum (1), shutting down >> >> >> >> >> >> On 9/25/2019 6:48 AM, TomK wrote: >>> Attached. >>> >>> >>> On 9/25/2019 5:08 AM, Sanju Rakonde wrote: >>>> Hi, The below errors indicate that brick process is failed to start. >>>> Please attach brick log. 
>>>> >>>> [glusterd-utils.c:6312:glusterd_brick_start] 0-management: starting a >>>> fresh brick process for brick /mnt/p01-d01/glusterv01 >>>> [2019-09-25 05:17:26.722717] E [MSGID: 106005] >>>> [glusterd-utils.c:6317:glusterd_brick_start] 0-management: Unable to >>>> start brick mdskvm-p01.nix.mds.xyz:/mnt/p01-d01/glusterv01 >>>> [2019-09-25 05:17:26.722960] D [MSGID: 0] >>>> [glusterd-utils.c:6327:glusterd_brick_start] 0-management: returning >>>> -107 >>>> [2019-09-25 05:17:26.723006] E [MSGID: 106122] >>>> [glusterd-mgmt.c:341:gd_mgmt_v3_commit_fn] 0-management: Volume start >>>> commit failed. >>>> >>>> On Wed, Sep 25, 2019 at 11:00 AM TomK >>> > wrote: >>>> >>>> ??? Hey All, >>>> >>>> ??? I'm getting the below error when trying to start a 2 node Gluster >>>> ??? cluster. >>>> >>>> ??? I had the quorum enabled when I was at version 3.12 .? However with >>>> ??? this >>>> ??? version it needed the quorum disabled.? So I did so however now >>>> see the >>>> ??? subject error. >>>> >>>> ??? Any ideas what I could try next? >>>> >>>> ??? -- ??? Thx, >>>> ??? TK. >>>> >>>> >>>> ??? [2019-09-25 05:17:26.615203] D [MSGID: 0] >>>> ??? [glusterd-utils.c:1136:glusterd_resolve_brick] 0-management: >>>> Returning 0 >>>> ??? [2019-09-25 05:17:26.615555] D [MSGID: 0] >>>> ??? [glusterd-mgmt.c:243:gd_mgmt_v3_pre_validate_fn] 0-management: >>>> OP = 5. >>>> ??? Returning 0 >>>> ??? [2019-09-25 05:17:26.616271] D [MSGID: 0] >>>> ??? [glusterd-utils.c:1767:glusterd_volinfo_find] 0-management: Volume >>>> ??? mdsgv01 found >>>> ??? [2019-09-25 05:17:26.616305] D [MSGID: 0] >>>> ??? [glusterd-utils.c:1774:glusterd_volinfo_find] 0-management: >>>> Returning 0 >>>> ??? [2019-09-25 05:17:26.616327] D [MSGID: 0] >>>> ??? [glusterd-utils.c:6327:glusterd_brick_start] 0-management: >>>> returning 0 >>>> ??? [2019-09-25 05:17:26.617056] I >>>> ??? [glusterd-utils.c:6312:glusterd_brick_start] 0-management: >>>> starting a >>>> ??? fresh brick process for brick /mnt/p01-d01/glusterv01 >>>> ??? [2019-09-25 05:17:26.722717] E [MSGID: 106005] >>>> ??? [glusterd-utils.c:6317:glusterd_brick_start] 0-management: >>>> Unable to >>>> ??? start brick mdskvm-p01.nix.mds.xyz:/mnt/p01-d01/glusterv01 >>>> ??? [2019-09-25 05:17:26.722960] D [MSGID: 0] >>>> ??? [glusterd-utils.c:6327:glusterd_brick_start] 0-management: >>>> returning >>>> ??? -107 >>>> ??? [2019-09-25 05:17:26.723006] E [MSGID: 106122] >>>> ??? [glusterd-mgmt.c:341:gd_mgmt_v3_commit_fn] 0-management: Volume >>>> start >>>> ??? commit failed. >>>> ??? [2019-09-25 05:17:26.723027] D [MSGID: 0] >>>> ??? [glusterd-mgmt.c:444:gd_mgmt_v3_commit_fn] 0-management: OP = 5. >>>> ??? Returning -107 >>>> ??? [2019-09-25 05:17:26.723045] E [MSGID: 106122] >>>> ??? [glusterd-mgmt.c:1696:glusterd_mgmt_v3_commit] 0-management: Commit >>>> ??? failed for operation Start on local node >>>> ??? [2019-09-25 05:17:26.723073] D [MSGID: 0] >>>> ??? [glusterd-op-sm.c:5106:glusterd_op_modify_op_ctx] 0-management: >>>> op_ctx >>>> ??? modification not required >>>> ??? [2019-09-25 05:17:26.723141] E [MSGID: 106122] >>>> ??? [glusterd-mgmt.c:2466:glusterd_mgmt_v3_initiate_all_phases] >>>> ??? 0-management: Commit Op Failed >>>> ??? [2019-09-25 05:17:26.723204] D [MSGID: 0] >>>> ??? [glusterd-locks.c:797:glusterd_mgmt_v3_unlock] 0-management: >>>> Trying to >>>> ??? release lock of vol mdsgv01 for >>>> f7336db6-22b4-497d-8c2f-04c833a28546 as >>>> ??? mdsgv01_vol >>>> ??? [2019-09-25 05:17:26.723239] D [MSGID: 0] >>>> ??? 
[glusterd-locks.c:846:glusterd_mgmt_v3_unlock] 0-management: >>>> Lock for >>>> ??? vol mdsgv01 successfully released >>>> ??? [2019-09-25 05:17:26.723273] D [MSGID: 0] >>>> ??? [glusterd-utils.c:1767:glusterd_volinfo_find] 0-management: Volume >>>> ??? mdsgv01 found >>>> ??? [2019-09-25 05:17:26.723326] D [MSGID: 0] >>>> ??? [glusterd-utils.c:1774:glusterd_volinfo_find] 0-management: >>>> Returning 0 >>>> ??? [2019-09-25 05:17:26.723360] D [MSGID: 0] >>>> ??? [glusterd-locks.c:464:glusterd_multiple_mgmt_v3_unlock] >>>> 0-management: >>>> ??? Returning 0 >>>> >>>> ??? ==> /var/log/glusterfs/cmd_history.log <== >>>> ??? [2019-09-25 05:17:26.723390]? : volume start mdsgv01 : FAILED : >>>> Commit >>>> ??? failed on localhost. Please check log file for details. >>>> >>>> ??? ==> /var/log/glusterfs/glusterd.log <== >>>> ??? [2019-09-25 05:17:26.723479] D [MSGID: 0] >>>> ??? [glusterd-rpc-ops.c:199:glusterd_op_send_cli_response] >>>> 0-management: >>>> ??? Returning 0 >>>> >>>> >>>> >>>> ??? [root at mdskvm-p01 glusterfs]# cat /etc/glusterfs/glusterd.vol >>>> ??? volume management >>>> ???? ? ? ?type mgmt/glusterd >>>> ???? ? ? ?option working-directory /var/lib/glusterd >>>> ???? ? ? ?option transport-type socket,rdma >>>> ???? ? ? ?option transport.socket.keepalive-time 10 >>>> ???? ? ? ?option transport.socket.keepalive-interval 2 >>>> ???? ? ? ?option transport.socket.read-fail-log off >>>> ???? ? ? ?option ping-timeout 0 >>>> ???? ? ? ?option event-threads 1 >>>> ???? ? ? ?option rpc-auth-allow-insecure on >>>> ???? ? ? ?# option cluster.server-quorum-type server >>>> ???? ? ? ?# option cluster.quorum-type auto >>>> ???? ? ? ?option server.event-threads 8 >>>> ???? ? ? ?option client.event-threads 8 >>>> ???? ? ? ?option performance.write-behind-window-size 8MB >>>> ???? ? ? ?option performance.io-thread-count 16 >>>> ???? ? ? ?option performance.cache-size 1GB >>>> ???? ? ? ?option nfs.trusted-sync on >>>> ???? ? ? ?option storage.owner-uid 36 >>>> ???? ? ? ?option storage.owner-uid 36 >>>> ???? ? ? ?option cluster.data-self-heal-algorithm full >>>> ???? ? ? ?option performance.low-prio-threads 32 >>>> ???? ? ? ?option features.shard-block-size 512MB >>>> ???? ? ? ?option features.shard on >>>> ??? end-volume >>>> ??? [root at mdskvm-p01 glusterfs]# >>>> >>>> >>>> ??? [root at mdskvm-p01 glusterfs]# gluster volume info >>>> >>>> ??? Volume Name: mdsgv01 >>>> ??? Type: Replicate >>>> ??? Volume ID: f5b57076-dbd4-4d77-ae13-c1f3ee3adbe0 >>>> ??? Status: Stopped >>>> ??? Snapshot Count: 0 >>>> ??? Number of Bricks: 1 x 2 = 2 >>>> ??? Transport-type: tcp >>>> ??? Bricks: >>>> ??? Brick1: mdskvm-p02.nix.mds.xyz:/mnt/p02-d01/glusterv02 >>>> ??? Brick2: mdskvm-p01.nix.mds.xyz:/mnt/p01-d01/glusterv01 >>>> ??? Options Reconfigured: >>>> ??? storage.owner-gid: 36 >>>> ??? cluster.data-self-heal-algorithm: full >>>> ??? performance.low-prio-threads: 32 >>>> ??? features.shard-block-size: 512MB >>>> ??? features.shard: on >>>> ??? storage.owner-uid: 36 >>>> ??? cluster.server-quorum-type: none >>>> ??? cluster.quorum-type: none >>>> ??? server.event-threads: 8 >>>> ??? client.event-threads: 8 >>>> ??? performance.write-behind-window-size: 8MB >>>> ??? performance.io-thread-count: 16 >>>> ??? performance.cache-size: 1GB >>>> ??? nfs.trusted-sync: on >>>> ??? server.allow-insecure: on >>>> ??? performance.readdir-ahead: on >>>> ??? diagnostics.brick-log-level: DEBUG >>>> ??? diagnostics.brick-sys-log-level: INFO >>>> ??? diagnostics.client-log-level: DEBUG >>>> ??? 
[root at mdskvm-p01 glusterfs]# >>>> >>>> >>>> ??? _______________________________________________ >>>> >>>> ??? Community Meeting Calendar: >>>> >>>> ??? APAC Schedule - >>>> ??? Every 2nd and 4th Tuesday at 11:30 AM IST >>>> ??? Bridge: https://bluejeans.com/118564314 >>>> >>>> ??? NA/EMEA Schedule - >>>> ??? Every 1st and 3rd Tuesday at 01:00 PM EDT >>>> ??? Bridge: https://bluejeans.com/118564314 >>>> >>>> ??? Gluster-devel mailing list >>>> ??? Gluster-devel at gluster.org >>>> ??? https://lists.gluster.org/mailman/listinfo/gluster-devel >>>> >>>> >>>> >>>> -- >>>> Thanks, >>>> Sanju >>> >>> >>> >>> _______________________________________________ >>> >>> Community Meeting Calendar: >>> >>> APAC Schedule - >>> Every 2nd and 4th Tuesday at 11:30 AM IST >>> Bridge: https://bluejeans.com/118564314 >>> >>> NA/EMEA Schedule - >>> Every 1st and 3rd Tuesday at 01:00 PM EDT >>> Bridge: https://bluejeans.com/118564314 >>> >>> Gluster-devel mailing list >>> Gluster-devel at gluster.org >>> https://lists.gluster.org/mailman/listinfo/gluster-devel >>> >> >> >> >> _______________________________________________ >> >> Community Meeting Calendar: >> >> APAC Schedule - >> Every 2nd and 4th Tuesday at 11:30 AM IST >> Bridge: https://bluejeans.com/118564314 >> >> NA/EMEA Schedule - >> Every 1st and 3rd Tuesday at 01:00 PM EDT >> Bridge: https://bluejeans.com/118564314 >> >> Gluster-devel mailing list >> Gluster-devel at gluster.org >> https://lists.gluster.org/mailman/listinfo/gluster-devel >> > > -- Thx, TK. From srakonde at redhat.com Wed Sep 25 12:47:28 2019 From: srakonde at redhat.com (Sanju Rakonde) Date: Wed, 25 Sep 2019 18:17:28 +0530 Subject: [Gluster-devel] 0-management: Commit failed for operation Start on local node In-Reply-To: References: <34a797ee-f0f1-4b49-452b-1d4b6166c11b@mdevsys.com> <9e668a11-6d53-21cf-0b60-b4e4a99b4c56@mdevsys.com> Message-ID: Great that you have managed to figure out the issue. On Wed, Sep 25, 2019 at 4:47 PM TomK wrote: > > This issue looked nearly identical to: > > https://bugzilla.redhat.com/show_bug.cgi?id=1702316 > > so tried: > > option transport.socket.listen-port 24007 > > And it worked: > > [root at mdskvm-p01 glusterfs]# systemctl stop glusterd > [root at mdskvm-p01 glusterfs]# history|grep server-quorum > 3149 gluster volume set mdsgv01 cluster.server-quorum-type none > 3186 history|grep server-quorum > [root at mdskvm-p01 glusterfs]# gluster volume set mdsgv01 > transport.socket.listen-port 24007 > Connection failed. Please check if gluster daemon is operational. > [root at mdskvm-p01 glusterfs]# systemctl start glusterd > [root at mdskvm-p01 glusterfs]# gluster volume set mdsgv01 > transport.socket.listen-port 24007 > volume set: failed: option : transport.socket.listen-port does not exist > Did you mean transport.keepalive or ...listen-backlog? 
> [root at mdskvm-p01 glusterfs]# > [root at mdskvm-p01 glusterfs]# netstat -pnltu > Active Internet connections (only servers) > Proto Recv-Q Send-Q Local Address Foreign Address > State PID/Program name > tcp 0 0 0.0.0.0:16514 0.0.0.0:* > LISTEN 4562/libvirtd > tcp 0 0 0.0.0.0:24007 0.0.0.0:* > LISTEN 24193/glusterd > tcp 0 0 0.0.0.0:2223 0.0.0.0:* > LISTEN 4277/sshd > tcp 0 0 0.0.0.0:111 0.0.0.0:* > LISTEN 1/systemd > tcp 0 0 0.0.0.0:51760 0.0.0.0:* > LISTEN 4479/rpc.statd > tcp 0 0 0.0.0.0:54322 0.0.0.0:* > LISTEN 13229/python > tcp 0 0 0.0.0.0:22 0.0.0.0:* > LISTEN 4279/sshd > tcp6 0 0 :::54811 :::* > LISTEN 4479/rpc.statd > tcp6 0 0 :::16514 :::* > LISTEN 4562/libvirtd > tcp6 0 0 :::2223 :::* > LISTEN 4277/sshd > tcp6 0 0 :::111 :::* > LISTEN 3357/rpcbind > tcp6 0 0 :::54321 :::* > LISTEN 13225/python2 > tcp6 0 0 :::22 :::* > LISTEN 4279/sshd > udp 0 0 0.0.0.0:24009 0.0.0.0:* > 4281/python2 > udp 0 0 0.0.0.0:38873 0.0.0.0:* > 4479/rpc.statd > udp 0 0 0.0.0.0:111 0.0.0.0:* > 1/systemd > udp 0 0 127.0.0.1:323 0.0.0.0:* > 3361/chronyd > udp 0 0 127.0.0.1:839 0.0.0.0:* > 4479/rpc.statd > udp 0 0 0.0.0.0:935 0.0.0.0:* > 3357/rpcbind > udp6 0 0 :::46947 :::* > 4479/rpc.statd > udp6 0 0 :::111 :::* > 3357/rpcbind > udp6 0 0 ::1:323 :::* > 3361/chronyd > udp6 0 0 :::935 :::* > 3357/rpcbind > [root at mdskvm-p01 glusterfs]# gluster volume start mdsgv01 > volume start: mdsgv01: success > [root at mdskvm-p01 glusterfs]# gluster volume info > > Volume Name: mdsgv01 > Type: Replicate > Volume ID: f5b57076-dbd4-4d77-ae13-c1f3ee3adbe0 > Status: Started > Snapshot Count: 0 > Number of Bricks: 1 x 2 = 2 > Transport-type: tcp > Bricks: > Brick1: mdskvm-p02.nix.mds.xyz:/mnt/p02-d01/glusterv02 > Brick2: mdskvm-p01.nix.mds.xyz:/mnt/p01-d01/glusterv01 > Options Reconfigured: > storage.owner-gid: 36 > cluster.data-self-heal-algorithm: full > performance.low-prio-threads: 32 > features.shard-block-size: 512MB > features.shard: on > storage.owner-uid: 36 > cluster.server-quorum-type: none > cluster.quorum-type: none > server.event-threads: 8 > client.event-threads: 8 > performance.write-behind-window-size: 8MB > performance.io-thread-count: 16 > performance.cache-size: 1GB > nfs.trusted-sync: on > server.allow-insecure: on > performance.readdir-ahead: on > diagnostics.brick-log-level: DEBUG > diagnostics.brick-sys-log-level: INFO > diagnostics.client-log-level: DEBUG > [root at mdskvm-p01 glusterfs]# gluster volume status > Status of volume: mdsgv01 > Gluster process TCP Port RDMA Port Online > Pid > > ------------------------------------------------------------------------------ > Brick mdskvm-p01.nix.mds.xyz:/mnt/p01-d01/g > lusterv01 49152 0 Y > 24487 > NFS Server on localhost N/A N/A N > N/A > Self-heal Daemon on localhost N/A N/A Y > 24515 > > Task Status of Volume mdsgv01 > > ------------------------------------------------------------------------------ > There are no active volume tasks > > [root at mdskvm-p01 glusterfs]# cat /etc/glusterfs/glusterd.vol > volume management > type mgmt/glusterd > option working-directory /var/lib/glusterd > option transport-type socket,rdma > option transport.socket.keepalive-time 10 > option transport.socket.keepalive-interval 2 > option transport.socket.read-fail-log off > option ping-timeout 0 > option event-threads 1 > option rpc-auth-allow-insecure on > option cluster.server-quorum-type none > option cluster.quorum-type none > # option cluster.server-quorum-type server > # option cluster.quorum-type auto > option server.event-threads 8 > option 
client.event-threads 8 > option performance.write-behind-window-size 8MB > option performance.io-thread-count 16 > option performance.cache-size 1GB > option nfs.trusted-sync on > option storage.owner-uid 36 > option storage.owner-uid 36 > option cluster.data-self-heal-algorithm full > option performance.low-prio-threads 32 > option features.shard-block-size 512MB > option features.shard on > option transport.socket.listen-port 24007 > end-volume > [root at mdskvm-p01 glusterfs]# > > > Cheers, > TK > > > On 9/25/2019 7:05 AM, TomK wrote: > > Mind you, I just upgraded from 3.12 to 6.X. > > > > On 9/25/2019 6:56 AM, TomK wrote: > >> > >> > >> Brick log for specific gluster start command attempt (full log > attached): > >> > >> [2019-09-25 10:53:37.847426] I [MSGID: 100030] > >> [glusterfsd.c:2847:main] 0-/usr/sbin/glusterfsd: Started running > >> /usr/sbin/glusterfsd version 6.5 (args: /usr/sbin/glusterfsd -s > >> mdskvm-p01.nix.mds.xyz --volfile-id > >> mdsgv01.mdskvm-p01.nix.mds.xyz.mnt-p01-d01-glusterv01 -p > >> > /var/run/gluster/vols/mdsgv01/mdskvm-p01.nix.mds.xyz-mnt-p01-d01-glusterv01.pid > > >> -S /var/run/gluster/defbdb699838d53b.socket --brick-name > >> /mnt/p01-d01/glusterv01 -l > >> /var/log/glusterfs/bricks/mnt-p01-d01-glusterv01.log --xlator-option > >> *-posix.glusterd-uuid=f7336db6-22b4-497d-8c2f-04c833a28546 > >> --process-name brick --brick-port 49155 --xlator-option > >> mdsgv01-server.listen-port=49155) > >> [2019-09-25 10:53:37.848508] I [glusterfsd.c:2556:daemonize] > >> 0-glusterfs: Pid of current running process is 23133 > >> [2019-09-25 10:53:37.858381] I [socket.c:902:__socket_server_bind] > >> 0-socket.glusterfsd: closing (AF_UNIX) reuse check socket 9 > >> [2019-09-25 10:53:37.865940] I [MSGID: 101190] > >> [event-epoll.c:680:event_dispatch_epoll_worker] 0-epoll: Started > >> thread with index 0 > >> [2019-09-25 10:53:37.866054] I > >> [glusterfsd-mgmt.c:2443:mgmt_rpc_notify] 0-glusterfsd-mgmt: > >> disconnected from remote-host: mdskvm-p01.nix.mds.xyz > >> [2019-09-25 10:53:37.866043] I [MSGID: 101190] > >> [event-epoll.c:680:event_dispatch_epoll_worker] 0-epoll: Started > >> thread with index 1 > >> [2019-09-25 10:53:37.866083] I > >> [glusterfsd-mgmt.c:2463:mgmt_rpc_notify] 0-glusterfsd-mgmt: Exhausted > >> all volfile servers > >> [2019-09-25 10:53:37.866454] W [glusterfsd.c:1570:cleanup_and_exit] > >> (-->/lib64/libgfrpc.so.0(+0xf1d3) [0x7f9680ee91d3] > >> -->/usr/sbin/glusterfsd(+0x12fef) [0x55ca25710fef] > >> -->/usr/sbin/glusterfsd(cleanup_and_exit+0x6b) [0x55ca2570901b] ) 0-: > >> received signum (1), shutting down > >> [2019-09-25 10:53:37.872399] I > >> [socket.c:3754:socket_submit_outgoing_msg] 0-glusterfs: not connected > >> (priv->connected = 0) > >> [2019-09-25 10:53:37.872445] W [rpc-clnt.c:1704:rpc_clnt_submit] > >> 0-glusterfs: failed to submit rpc-request (unique: 0, XID: 0x2 > >> Program: Gluster Portmap, ProgVers: 1, Proc: 5) to rpc-transport > >> (glusterfs) > >> [2019-09-25 10:53:37.872534] W [glusterfsd.c:1570:cleanup_and_exit] > >> (-->/lib64/libgfrpc.so.0(+0xf1d3) [0x7f9680ee91d3] > >> -->/usr/sbin/glusterfsd(+0x12fef) [0x55ca25710fef] > >> -->/usr/sbin/glusterfsd(cleanup_and_exit+0x6b) [0x55ca2570901b] ) 0-: > >> received signum (1), shutting down > >> > >> > >> > >> > >> > >> On 9/25/2019 6:48 AM, TomK wrote: > >>> Attached. > >>> > >>> > >>> On 9/25/2019 5:08 AM, Sanju Rakonde wrote: > >>>> Hi, The below errors indicate that brick process is failed to start. > >>>> Please attach brick log. 
> >>>> > >>>> [glusterd-utils.c:6312:glusterd_brick_start] 0-management: starting a > >>>> fresh brick process for brick /mnt/p01-d01/glusterv01 > >>>> [2019-09-25 05:17:26.722717] E [MSGID: 106005] > >>>> [glusterd-utils.c:6317:glusterd_brick_start] 0-management: Unable to > >>>> start brick mdskvm-p01.nix.mds.xyz:/mnt/p01-d01/glusterv01 > >>>> [2019-09-25 05:17:26.722960] D [MSGID: 0] > >>>> [glusterd-utils.c:6327:glusterd_brick_start] 0-management: returning > >>>> -107 > >>>> [2019-09-25 05:17:26.723006] E [MSGID: 106122] > >>>> [glusterd-mgmt.c:341:gd_mgmt_v3_commit_fn] 0-management: Volume start > >>>> commit failed. > >>>> > >>>> On Wed, Sep 25, 2019 at 11:00 AM TomK >>>> > wrote: > >>>> > >>>> Hey All, > >>>> > >>>> I'm getting the below error when trying to start a 2 node Gluster > >>>> cluster. > >>>> > >>>> I had the quorum enabled when I was at version 3.12 . However > with > >>>> this > >>>> version it needed the quorum disabled. So I did so however now > >>>> see the > >>>> subject error. > >>>> > >>>> Any ideas what I could try next? > >>>> > >>>> -- Thx, > >>>> TK. > >>>> > >>>> > >>>> [2019-09-25 05:17:26.615203] D [MSGID: 0] > >>>> [glusterd-utils.c:1136:glusterd_resolve_brick] 0-management: > >>>> Returning 0 > >>>> [2019-09-25 05:17:26.615555] D [MSGID: 0] > >>>> [glusterd-mgmt.c:243:gd_mgmt_v3_pre_validate_fn] 0-management: > >>>> OP = 5. > >>>> Returning 0 > >>>> [2019-09-25 05:17:26.616271] D [MSGID: 0] > >>>> [glusterd-utils.c:1767:glusterd_volinfo_find] 0-management: Volume > >>>> mdsgv01 found > >>>> [2019-09-25 05:17:26.616305] D [MSGID: 0] > >>>> [glusterd-utils.c:1774:glusterd_volinfo_find] 0-management: > >>>> Returning 0 > >>>> [2019-09-25 05:17:26.616327] D [MSGID: 0] > >>>> [glusterd-utils.c:6327:glusterd_brick_start] 0-management: > >>>> returning 0 > >>>> [2019-09-25 05:17:26.617056] I > >>>> [glusterd-utils.c:6312:glusterd_brick_start] 0-management: > >>>> starting a > >>>> fresh brick process for brick /mnt/p01-d01/glusterv01 > >>>> [2019-09-25 05:17:26.722717] E [MSGID: 106005] > >>>> [glusterd-utils.c:6317:glusterd_brick_start] 0-management: > >>>> Unable to > >>>> start brick mdskvm-p01.nix.mds.xyz:/mnt/p01-d01/glusterv01 > >>>> [2019-09-25 05:17:26.722960] D [MSGID: 0] > >>>> [glusterd-utils.c:6327:glusterd_brick_start] 0-management: > >>>> returning > >>>> -107 > >>>> [2019-09-25 05:17:26.723006] E [MSGID: 106122] > >>>> [glusterd-mgmt.c:341:gd_mgmt_v3_commit_fn] 0-management: Volume > >>>> start > >>>> commit failed. > >>>> [2019-09-25 05:17:26.723027] D [MSGID: 0] > >>>> [glusterd-mgmt.c:444:gd_mgmt_v3_commit_fn] 0-management: OP = 5. 
> >>>> Returning -107 > >>>> [2019-09-25 05:17:26.723045] E [MSGID: 106122] > >>>> [glusterd-mgmt.c:1696:glusterd_mgmt_v3_commit] 0-management: > Commit > >>>> failed for operation Start on local node > >>>> [2019-09-25 05:17:26.723073] D [MSGID: 0] > >>>> [glusterd-op-sm.c:5106:glusterd_op_modify_op_ctx] 0-management: > >>>> op_ctx > >>>> modification not required > >>>> [2019-09-25 05:17:26.723141] E [MSGID: 106122] > >>>> [glusterd-mgmt.c:2466:glusterd_mgmt_v3_initiate_all_phases] > >>>> 0-management: Commit Op Failed > >>>> [2019-09-25 05:17:26.723204] D [MSGID: 0] > >>>> [glusterd-locks.c:797:glusterd_mgmt_v3_unlock] 0-management: > >>>> Trying to > >>>> release lock of vol mdsgv01 for > >>>> f7336db6-22b4-497d-8c2f-04c833a28546 as > >>>> mdsgv01_vol > >>>> [2019-09-25 05:17:26.723239] D [MSGID: 0] > >>>> [glusterd-locks.c:846:glusterd_mgmt_v3_unlock] 0-management: > >>>> Lock for > >>>> vol mdsgv01 successfully released > >>>> [2019-09-25 05:17:26.723273] D [MSGID: 0] > >>>> [glusterd-utils.c:1767:glusterd_volinfo_find] 0-management: Volume > >>>> mdsgv01 found > >>>> [2019-09-25 05:17:26.723326] D [MSGID: 0] > >>>> [glusterd-utils.c:1774:glusterd_volinfo_find] 0-management: > >>>> Returning 0 > >>>> [2019-09-25 05:17:26.723360] D [MSGID: 0] > >>>> [glusterd-locks.c:464:glusterd_multiple_mgmt_v3_unlock] > >>>> 0-management: > >>>> Returning 0 > >>>> > >>>> ==> /var/log/glusterfs/cmd_history.log <== > >>>> [2019-09-25 05:17:26.723390] : volume start mdsgv01 : FAILED : > >>>> Commit > >>>> failed on localhost. Please check log file for details. > >>>> > >>>> ==> /var/log/glusterfs/glusterd.log <== > >>>> [2019-09-25 05:17:26.723479] D [MSGID: 0] > >>>> [glusterd-rpc-ops.c:199:glusterd_op_send_cli_response] > >>>> 0-management: > >>>> Returning 0 > >>>> > >>>> > >>>> > >>>> [root at mdskvm-p01 glusterfs]# cat /etc/glusterfs/glusterd.vol > >>>> volume management > >>>> type mgmt/glusterd > >>>> option working-directory /var/lib/glusterd > >>>> option transport-type socket,rdma > >>>> option transport.socket.keepalive-time 10 > >>>> option transport.socket.keepalive-interval 2 > >>>> option transport.socket.read-fail-log off > >>>> option ping-timeout 0 > >>>> option event-threads 1 > >>>> option rpc-auth-allow-insecure on > >>>> # option cluster.server-quorum-type server > >>>> # option cluster.quorum-type auto > >>>> option server.event-threads 8 > >>>> option client.event-threads 8 > >>>> option performance.write-behind-window-size 8MB > >>>> option performance.io-thread-count 16 > >>>> option performance.cache-size 1GB > >>>> option nfs.trusted-sync on > >>>> option storage.owner-uid 36 > >>>> option storage.owner-uid 36 > >>>> option cluster.data-self-heal-algorithm full > >>>> option performance.low-prio-threads 32 > >>>> option features.shard-block-size 512MB > >>>> option features.shard on > >>>> end-volume > >>>> [root at mdskvm-p01 glusterfs]# > >>>> > >>>> > >>>> [root at mdskvm-p01 glusterfs]# gluster volume info > >>>> > >>>> Volume Name: mdsgv01 > >>>> Type: Replicate > >>>> Volume ID: f5b57076-dbd4-4d77-ae13-c1f3ee3adbe0 > >>>> Status: Stopped > >>>> Snapshot Count: 0 > >>>> Number of Bricks: 1 x 2 = 2 > >>>> Transport-type: tcp > >>>> Bricks: > >>>> Brick1: mdskvm-p02.nix.mds.xyz:/mnt/p02-d01/glusterv02 > >>>> Brick2: mdskvm-p01.nix.mds.xyz:/mnt/p01-d01/glusterv01 > >>>> Options Reconfigured: > >>>> storage.owner-gid: 36 > >>>> cluster.data-self-heal-algorithm: full > >>>> performance.low-prio-threads: 32 > >>>> features.shard-block-size: 512MB > >>>> features.shard: on > 
>>>> storage.owner-uid: 36 > >>>> cluster.server-quorum-type: none > >>>> cluster.quorum-type: none > >>>> server.event-threads: 8 > >>>> client.event-threads: 8 > >>>> performance.write-behind-window-size: 8MB > >>>> performance.io-thread-count: 16 > >>>> performance.cache-size: 1GB > >>>> nfs.trusted-sync: on > >>>> server.allow-insecure: on > >>>> performance.readdir-ahead: on > >>>> diagnostics.brick-log-level: DEBUG > >>>> diagnostics.brick-sys-log-level: INFO > >>>> diagnostics.client-log-level: DEBUG > >>>> [root at mdskvm-p01 glusterfs]# > >>>> > >>>> > >>>> _______________________________________________ > >>>> > >>>> Community Meeting Calendar: > >>>> > >>>> APAC Schedule - > >>>> Every 2nd and 4th Tuesday at 11:30 AM IST > >>>> Bridge: https://bluejeans.com/118564314 > >>>> > >>>> NA/EMEA Schedule - > >>>> Every 1st and 3rd Tuesday at 01:00 PM EDT > >>>> Bridge: https://bluejeans.com/118564314 > >>>> > >>>> Gluster-devel mailing list > >>>> Gluster-devel at gluster.org > >>>> https://lists.gluster.org/mailman/listinfo/gluster-devel > >>>> > >>>> > >>>> > >>>> -- > >>>> Thanks, > >>>> Sanju > >>> > >>> > >>> > >>> _______________________________________________ > >>> > >>> Community Meeting Calendar: > >>> > >>> APAC Schedule - > >>> Every 2nd and 4th Tuesday at 11:30 AM IST > >>> Bridge: https://bluejeans.com/118564314 > >>> > >>> NA/EMEA Schedule - > >>> Every 1st and 3rd Tuesday at 01:00 PM EDT > >>> Bridge: https://bluejeans.com/118564314 > >>> > >>> Gluster-devel mailing list > >>> Gluster-devel at gluster.org > >>> https://lists.gluster.org/mailman/listinfo/gluster-devel > >>> > >> > >> > >> > >> _______________________________________________ > >> > >> Community Meeting Calendar: > >> > >> APAC Schedule - > >> Every 2nd and 4th Tuesday at 11:30 AM IST > >> Bridge: https://bluejeans.com/118564314 > >> > >> NA/EMEA Schedule - > >> Every 1st and 3rd Tuesday at 01:00 PM EDT > >> Bridge: https://bluejeans.com/118564314 > >> > >> Gluster-devel mailing list > >> Gluster-devel at gluster.org > >> https://lists.gluster.org/mailman/listinfo/gluster-devel > >> > > > > > > > -- > Thx, > TK. > -- Thanks, Sanju -------------- next part -------------- An HTML attachment was scrubbed... URL: From sunkumar at redhat.com Thu Sep 26 05:19:12 2019 From: sunkumar at redhat.com (Sunny Kumar) Date: Thu, 26 Sep 2019 10:49:12 +0530 Subject: [Gluster-devel] Fwd: Gluster meetup: India In-Reply-To: References: <20190924091933.GA28991@ndevos-x270.lan.nixpanic.net> Message-ID: ---------- Forwarded message --------- From: Sunny Kumar Date: Thu, Sep 26, 2019 at 10:09 AM Subject: Re: Gluster meetup: India To: Cc: A mailing list of Red Hat associates involved in development, testing and production of RHGS , storage-eng , RHS team in Bangalore Thanks everyone for making this meetup successful! It was a great event and total participation was 40+. People who joined meetup in person had a chance to grab gluster goodies. We started meetup with Atin's talk on current status of gluster. In second session Yaniv talked about possible candidates of features landing in Gluster X. 3rd session where Aravinda and Amar talked about KaDalu (Rethinking gluster management) followed by a demo by Aravinda on Gluster Dashboard experiment. We concluded meetup with discussion around user story and migration from gerrit to github. I found all session interesting and informative. Please find recording of session here[1]. It is divided into 4 separate chapters (sessions). 
[1].https://bluejeans.com/s/rotg2 /sunny On Tue, Sep 24, 2019 at 4:51 PM Sunny Kumar wrote: > > Hello folks, > > Please find final agenda for gluster meetup here[1] and bluejeans link[2]. > > [1]. https://www.meetup.com/glusterfs-India/events/264366771/. > > [2]. https://bluejeans.com/2114306332. > > /sunny > > On Tue, Sep 24, 2019 at 2:49 PM Niels de Vos wrote: > > > > On Tue, Sep 24, 2019 at 01:32:04PM +0530, Sunny Kumar wrote: > > > Hello folks, > > > > > > For people who are not able to join meetup in person have good news as > > > I will be hosting this meeting on bluejeans and all session will be > > > recorded. > > > > > > Details: > > > > > > https://bluejeans.com/2114306332 > > > > That is great, thanks! Could you share the agenda with the time > > schedule? > > > > Niels > > > > > > > > > > > > > To join from a Red Hat Deskphone or Softphone, dial: 84336. > > > Join Meeting > > > (Join from computer or phone) > > > ________________________________ > > > > > > Connecting directly from a room system? > > > > > > 1.) Dial: 199.48.152.152 or bjn.vc > > > 2.) Enter Meeting ID: 2114306332 > > > > > > Just want to dial in on your phone? > > > > > > 1.) Dial one of the following numbers: > > > 408-915-6466 (US) > > > See all numbers > > > 2.) Enter Meeting ID: 2114306332 > > > 3.) Press # > > > > > > ________________________________ > > > Description: > > > GlusterFS Bangalore Meetup > > > > > > Agenda: > > > > > > *) Gluster- X Speaker: Yaniv Kaul > > > > > > *) Kadalu - k8s storage with Gluster- Speakers: Aravinda & Amar > > > > > > *) Discussion Q/A > > > ________________________________ > > > Want to test your video connection? > > > https://bluejeans.com/111 > > > > > > > > > > > > /sunny > > > On Mon, Sep 23, 2019 at 2:01 PM Sunny Kumar wrote: > > > > > > > > Hello folks, > > > > > > > > A gentle reminder! > > > > Please do RSVP, if planning to attained. > > > > > > > > /sunny > > > > > > > > On Wed, Sep 18, 2019 at 11:09 AM Sunny Kumar wrote: > > > > > > > > > > ---------- Forwarded message --------- > > > > > From: Sunny Kumar > > > > > Date: Wed, Aug 28, 2019 at 6:21 PM > > > > > Subject: Gluster meetup: India > > > > > To: gluster-users , Gluster Devel > > > > > > > > > > Cc: Yaniv Kaul , Atin Mukherjee > > > > > , Tumballi, Amar , > > > > > Udayakumar Chandrashekhar , Neha Kulkarni > > > > > > > > > > > > > > > > > > > > Hello folks, > > > > > > > > > > We are hosting Gluster meetup at our office (Redhat-BLR-IN) on 25th > > > > > September 2019. > > > > > > > > > > Please find the agenda and location detail here [1] and plan accordingly. > > > > > > > > > > The highlight of this event will be Gluster -X we will keep on > > > > > updating agenda with topics, so keep an eye on it. > > > > > > > > > > Note: > > > > > * RSVP as YES if attending, this will help us to organize the > > > > > facilities better. > > > > > > > > > > If you have any question, please reach out to me or comment on the > > > > > event page [1]. > > > > > > > > > > Feel free to share this meetup via other channels. > > > > > > > > > > [1]. https://www.meetup.com/glusterfs-India/events/264366771/ > > > > > > > > > > > > > > > /sunny > > > > > > --- > > > Note: This list is intended for discussions relating to Red Hat Storage products, customers and/or support. Discussions on GlusterFS and Ceph architecture, design and engineering should go to relevant upstream mailing lists. 
From rkothiya at redhat.com Thu Sep 26 07:51:18 2019 From: rkothiya at redhat.com (Rinku Kothiya) Date: Thu, 26 Sep 2019 13:21:18 +0530 Subject: [Gluster-devel] [Gluster-Maintainers] GlusterFS - 7.0RC1 - Test day Postponed In-Reply-To: References: Message-ID: Hi, As some of you wanted to know why have we postponed the test day and created rc2. So I am sharing the details below : * Some fixes which were part of release-6 had not made it to release7. * Also there were some blocker bugs which needed to be in release-7. The links to these bugs and fixes are given below : https://bugzilla.redhat.com/show_bug.cgi?id=1755212 https://bugzilla.redhat.com/show_bug.cgi?id=1755213 https://bugzilla.redhat.com/show_bug.cgi?id=1752429 https://review.gluster.org/#/c/glusterfs/+/23431/ https://review.gluster.org/#/c/glusterfs/+/23476/ https://review.gluster.org/#/c/glusterfs/+/23475/ Regards Rinku On Wed, Sep 25, 2019 at 12:45 PM Rinku Kothiya wrote: > Hi, > > As we are planning to do a RC2 for release-7 we would want to postpone the > test day event. Hence we wont be having test day on 26-Sep-2019. I will > keep you posted on the rescheduled date of test day. > > Regards > Rinku > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ykaul at redhat.com Fri Sep 27 17:42:04 2019 From: ykaul at redhat.com (Yaniv Kaul) Date: Fri, 27 Sep 2019 20:42:04 +0300 Subject: [Gluster-devel] Fwd: Gluster meetup: India In-Reply-To: References: <20190924091933.GA28991@ndevos-x270.lan.nixpanic.net> Message-ID: Attached please find a PDF version of my presentation. Y. On Thu, Sep 26, 2019 at 8:20 AM Sunny Kumar wrote: > ---------- Forwarded message --------- > From: Sunny Kumar > Date: Thu, Sep 26, 2019 at 10:09 AM > Subject: Re: Gluster meetup: India > To: > Cc: A mailing list of Red Hat associates involved in development, > testing and production of RHGS , storage-eng > , RHS team in Bangalore > > > > Thanks everyone for making this meetup successful! It was a great > event and total participation was 40+. People who joined meetup in > person had a chance to grab gluster goodies. > > We started meetup with Atin's talk on current status of gluster. In > second session Yaniv talked about possible candidates of features > landing in Gluster X. 3rd session where Aravinda and Amar talked about > KaDalu (Rethinking gluster management) followed by a demo by Aravinda > on Gluster Dashboard experiment. > > We concluded meetup with discussion around user story and migration > from gerrit to github. I found all session interesting and > informative. > > Please find recording of session here[1]. It is divided into 4 > separate chapters (sessions). > > > [1].https://bluejeans.com/s/rotg2 > > /sunny > > > On Tue, Sep 24, 2019 at 4:51 PM Sunny Kumar wrote: > > > > Hello folks, > > > > Please find final agenda for gluster meetup here[1] and bluejeans > link[2]. > > > > [1]. https://www.meetup.com/glusterfs-India/events/264366771/. > > > > [2]. https://bluejeans.com/2114306332. > > > > /sunny > > > > On Tue, Sep 24, 2019 at 2:49 PM Niels de Vos wrote: > > > > > > On Tue, Sep 24, 2019 at 01:32:04PM +0530, Sunny Kumar wrote: > > > > Hello folks, > > > > > > > > For people who are not able to join meetup in person have good news > as > > > > I will be hosting this meeting on bluejeans and all session will be > > > > recorded. > > > > > > > > Details: > > > > > > > > https://bluejeans.com/2114306332 > > > > > > That is great, thanks! Could you share the agenda with the time > > > schedule? 
> > > > > > Niels > > > > > > > > > > > > > > > > > > To join from a Red Hat Deskphone or Softphone, dial: 84336. > > > > Join Meeting > > > > (Join from computer or phone) > > > > ________________________________ > > > > > > > > Connecting directly from a room system? > > > > > > > > 1.) Dial: 199.48.152.152 or bjn.vc > > > > 2.) Enter Meeting ID: 2114306332 > > > > > > > > Just want to dial in on your phone? > > > > > > > > 1.) Dial one of the following numbers: > > > > 408-915-6466 (US) > > > > See all numbers > > > > 2.) Enter Meeting ID: 2114306332 > > > > 3.) Press # > > > > > > > > ________________________________ > > > > Description: > > > > GlusterFS Bangalore Meetup > > > > > > > > Agenda: > > > > > > > > *) Gluster- X Speaker: Yaniv Kaul > > > > > > > > *) Kadalu - k8s storage with Gluster- Speakers: Aravinda & Amar > > > > > > > > *) Discussion Q/A > > > > ________________________________ > > > > Want to test your video connection? > > > > https://bluejeans.com/111 > > > > > > > > > > > > > > > > /sunny > > > > On Mon, Sep 23, 2019 at 2:01 PM Sunny Kumar > wrote: > > > > > > > > > > Hello folks, > > > > > > > > > > A gentle reminder! > > > > > Please do RSVP, if planning to attained. > > > > > > > > > > /sunny > > > > > > > > > > On Wed, Sep 18, 2019 at 11:09 AM Sunny Kumar > wrote: > > > > > > > > > > > > ---------- Forwarded message --------- > > > > > > From: Sunny Kumar > > > > > > Date: Wed, Aug 28, 2019 at 6:21 PM > > > > > > Subject: Gluster meetup: India > > > > > > To: gluster-users , Gluster Devel > > > > > > > > > > > > Cc: Yaniv Kaul , Atin Mukherjee > > > > > > , Tumballi, Amar , > > > > > > Udayakumar Chandrashekhar , Neha Kulkarni > > > > > > > > > > > > > > > > > > > > > > > > Hello folks, > > > > > > > > > > > > We are hosting Gluster meetup at our office (Redhat-BLR-IN) on > 25th > > > > > > September 2019. > > > > > > > > > > > > Please find the agenda and location detail here [1] and plan > accordingly. > > > > > > > > > > > > The highlight of this event will be Gluster -X we will keep on > > > > > > updating agenda with topics, so keep an eye on it. > > > > > > > > > > > > Note: > > > > > > * RSVP as YES if attending, this will help us to organize the > > > > > > facilities better. > > > > > > > > > > > > If you have any question, please reach out to me or comment on > the > > > > > > event page [1]. > > > > > > > > > > > > Feel free to share this meetup via other channels. > > > > > > > > > > > > [1]. https://www.meetup.com/glusterfs-India/events/264366771/ > > > > > > > > > > > > > > > > > > /sunny > > > > > > > > --- > > > > Note: This list is intended for discussions relating to Red Hat > Storage products, customers and/or support. Discussions on GlusterFS and > Ceph architecture, design and engineering should go to relevant upstream > mailing lists. > > _______________________________________________ > > Community Meeting Calendar: > > APAC Schedule - > Every 2nd and 4th Tuesday at 11:30 AM IST > Bridge: https://bluejeans.com/118564314 > > NA/EMEA Schedule - > Every 1st and 3rd Tuesday at 01:00 PM EDT > Bridge: https://bluejeans.com/118564314 > > Gluster-devel mailing list > Gluster-devel at gluster.org > https://lists.gluster.org/mailman/listinfo/gluster-devel > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: Gluster X presentation.pdf Type: application/pdf Size: 801638 bytes Desc: not available URL: From avishwan at redhat.com Sat Sep 28 16:18:29 2019 From: avishwan at redhat.com (Aravinda Vishwanathapura Krishna Murthy) Date: Sat, 28 Sep 2019 21:48:29 +0530 Subject: [Gluster-devel] Fwd: Gluster meetup: India In-Reply-To: References: <20190924091933.GA28991@ndevos-x270.lan.nixpanic.net> Message-ID: Enclosed the slides of the following topics 1. Rethinking Gluster Management using K8s - KaDalu Kadalu is a new project to provide storage for Kubernetes by avoiding Glusterd/Glusterd2 layer and using the Kubernetes capabilities. This project started as a demo for Devconf India 2019 event but turns out as a simple solution. Feel free to open feature requests or issues here: https://github.com/kadalu/kadalu/issues Project repo is here: https://github.com/kadalu/kadalu 2. Gluster dashboard experiment This project designed for Phoenix Phrenzy contest() using the Elixir Phoenix's new feature called Liveview. This project is not yet complete and not ready for production yet. Comments and suggestions are welcome. Project repo is here: https://github.com/aravindavk/gluster-dashboard/ On Fri, Sep 27, 2019 at 11:13 PM Yaniv Kaul wrote: > Attached please find a PDF version of my presentation. > Y. > > On Thu, Sep 26, 2019 at 8:20 AM Sunny Kumar wrote: > >> ---------- Forwarded message --------- >> From: Sunny Kumar >> Date: Thu, Sep 26, 2019 at 10:09 AM >> Subject: Re: Gluster meetup: India >> To: >> Cc: A mailing list of Red Hat associates involved in development, >> testing and production of RHGS , storage-eng >> , RHS team in Bangalore >> >> >> >> Thanks everyone for making this meetup successful! It was a great >> event and total participation was 40+. People who joined meetup in >> person had a chance to grab gluster goodies. >> >> We started meetup with Atin's talk on current status of gluster. In >> second session Yaniv talked about possible candidates of features >> landing in Gluster X. 3rd session where Aravinda and Amar talked about >> KaDalu (Rethinking gluster management) followed by a demo by Aravinda >> on Gluster Dashboard experiment. >> >> We concluded meetup with discussion around user story and migration >> from gerrit to github. I found all session interesting and >> informative. >> >> Please find recording of session here[1]. It is divided into 4 >> separate chapters (sessions). >> >> >> [1].https://bluejeans.com/s/rotg2 >> >> /sunny >> >> >> On Tue, Sep 24, 2019 at 4:51 PM Sunny Kumar wrote: >> > >> > Hello folks, >> > >> > Please find final agenda for gluster meetup here[1] and bluejeans >> link[2]. >> > >> > [1]. https://www.meetup.com/glusterfs-India/events/264366771/. >> > >> > [2]. https://bluejeans.com/2114306332. >> > >> > /sunny >> > >> > On Tue, Sep 24, 2019 at 2:49 PM Niels de Vos wrote: >> > > >> > > On Tue, Sep 24, 2019 at 01:32:04PM +0530, Sunny Kumar wrote: >> > > > Hello folks, >> > > > >> > > > For people who are not able to join meetup in person have good news >> as >> > > > I will be hosting this meeting on bluejeans and all session will be >> > > > recorded. >> > > > >> > > > Details: >> > > > >> > > > https://bluejeans.com/2114306332 >> > > >> > > That is great, thanks! Could you share the agenda with the time >> > > schedule? >> > > >> > > Niels >> > > >> > > >> > > > >> > > > >> > > > To join from a Red Hat Deskphone or Softphone, dial: 84336. 
>> > > > Join Meeting >> > > > (Join from computer or phone) >> > > > ________________________________ >> > > > >> > > > Connecting directly from a room system? >> > > > >> > > > 1.) Dial: 199.48.152.152 or bjn.vc >> > > > 2.) Enter Meeting ID: 2114306332 >> > > > >> > > > Just want to dial in on your phone? >> > > > >> > > > 1.) Dial one of the following numbers: >> > > > 408-915-6466 (US) >> > > > See all numbers >> > > > 2.) Enter Meeting ID: 2114306332 >> > > > 3.) Press # >> > > > >> > > > ________________________________ >> > > > Description: >> > > > GlusterFS Bangalore Meetup >> > > > >> > > > Agenda: >> > > > >> > > > *) Gluster- X Speaker: Yaniv Kaul >> > > > >> > > > *) Kadalu - k8s storage with Gluster- Speakers: Aravinda & Amar >> > > > >> > > > *) Discussion Q/A >> > > > ________________________________ >> > > > Want to test your video connection? >> > > > https://bluejeans.com/111 >> > > > >> > > > >> > > > >> > > > /sunny >> > > > On Mon, Sep 23, 2019 at 2:01 PM Sunny Kumar >> wrote: >> > > > > >> > > > > Hello folks, >> > > > > >> > > > > A gentle reminder! >> > > > > Please do RSVP, if planning to attained. >> > > > > >> > > > > /sunny >> > > > > >> > > > > On Wed, Sep 18, 2019 at 11:09 AM Sunny Kumar >> wrote: >> > > > > > >> > > > > > ---------- Forwarded message --------- >> > > > > > From: Sunny Kumar >> > > > > > Date: Wed, Aug 28, 2019 at 6:21 PM >> > > > > > Subject: Gluster meetup: India >> > > > > > To: gluster-users , Gluster Devel >> > > > > > >> > > > > > Cc: Yaniv Kaul , Atin Mukherjee >> > > > > > , Tumballi, Amar , >> > > > > > Udayakumar Chandrashekhar , Neha Kulkarni >> > > > > > >> > > > > > >> > > > > > >> > > > > > Hello folks, >> > > > > > >> > > > > > We are hosting Gluster meetup at our office (Redhat-BLR-IN) on >> 25th >> > > > > > September 2019. >> > > > > > >> > > > > > Please find the agenda and location detail here [1] and plan >> accordingly. >> > > > > > >> > > > > > The highlight of this event will be Gluster -X we will keep on >> > > > > > updating agenda with topics, so keep an eye on it. >> > > > > > >> > > > > > Note: >> > > > > > * RSVP as YES if attending, this will help us to organize the >> > > > > > facilities better. >> > > > > > >> > > > > > If you have any question, please reach out to me or comment on >> the >> > > > > > event page [1]. >> > > > > > >> > > > > > Feel free to share this meetup via other channels. >> > > > > > >> > > > > > [1]. https://www.meetup.com/glusterfs-India/events/264366771/ >> > > > > > >> > > > > > >> > > > > > /sunny >> > > > >> > > > --- >> > > > Note: This list is intended for discussions relating to Red Hat >> Storage products, customers and/or support. Discussions on GlusterFS and >> Ceph architecture, design and engineering should go to relevant upstream >> mailing lists. 
>> >> _______________________________________________ >> >> Community Meeting Calendar: >> >> APAC Schedule - >> Every 2nd and 4th Tuesday at 11:30 AM IST >> Bridge: https://bluejeans.com/118564314 >> >> NA/EMEA Schedule - >> Every 1st and 3rd Tuesday at 01:00 PM EDT >> Bridge: https://bluejeans.com/118564314 >> >> Gluster-devel mailing list >> Gluster-devel at gluster.org >> https://lists.gluster.org/mailman/listinfo/gluster-devel >> >> _______________________________________________ > > Community Meeting Calendar: > > APAC Schedule - > Every 2nd and 4th Tuesday at 11:30 AM IST > Bridge: https://bluejeans.com/118564314 > > NA/EMEA Schedule - > Every 1st and 3rd Tuesday at 01:00 PM EDT > Bridge: https://bluejeans.com/118564314 > > Gluster-devel mailing list > Gluster-devel at gluster.org > https://lists.gluster.org/mailman/listinfo/gluster-devel > > -- regards Aravinda VK -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Gluster Dashboard.pdf Type: application/pdf Size: 231061 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Rethinking Gluster Management using k8s - Gluster Meetup.pdf Type: application/pdf Size: 568183 bytes Desc: not available URL: From jenkins at build.gluster.org Mon Sep 30 01:45:04 2019 From: jenkins at build.gluster.org (jenkins at build.gluster.org) Date: Mon, 30 Sep 2019 01:45:04 +0000 (UTC) Subject: [Gluster-devel] Weekly Untriaged Bugs Message-ID: <922430953.9.1569807904703.JavaMail.jenkins@jenkins-el7.rht.gluster.org> [...truncated 7 lines...] https://bugzilla.redhat.com/1750265 / cli: configure name server in host will cause cli command hanging https://bugzilla.redhat.com/1753994 / core: Mtime is not updated on setting it to older date online when sharding enabled https://bugzilla.redhat.com/1749272 / disperse: The version of the file in the disperse volume created with different nodes is incorrect https://bugzilla.redhat.com/1747844 / distribute: Rebalance doesn't work correctly if performance.parallel-readdir on and with some other specific options set https://bugzilla.redhat.com/1751575 / encryption-xlator: File corruption in encrypted volume during read operation https://bugzilla.redhat.com/1756704 / glusterd: Peer Rejected (Connected) after instance recreation https://bugzilla.redhat.com/1754483 / glusterd: Peer wrongly shown as Connected. 
https://bugzilla.redhat.com/1755700 / project-infrastructure: 404 error : https://build.gluster.org/job/centos7-regression/7972/consoleFull https://bugzilla.redhat.com/1753587 / project-infrastructure: https://build.gluster.org/job/compare-bug-version-and-git-branch/41059/ fails for public BZ https://bugzilla.redhat.com/1756216 / project-infrastructure: Please migrate the "gluster-geosync" repo under github organization https://bugzilla.redhat.com/1754017 / project-infrastructure: request a user account for the blog on gluster.org https://bugzilla.redhat.com/1755418 / project-infrastructure: Unable to access review.gluster.org https://bugzilla.redhat.com/1755721 / project-infrastructure: Unable to start the release job https://bugzilla.redhat.com/1754517 / quota: Gluster 6.5 not listing all quotas https://bugzilla.redhat.com/1749625 / rpc: [GlusterFS 6.1] GlusterFS brick process crash https://bugzilla.redhat.com/1748205 / selfheal: null gfid entries can not be healed https://bugzilla.redhat.com/1753413 / selfheal: Self-heal daemon crashes https://bugzilla.redhat.com/1749369 / write-behind: Segmentation fault occurs while truncate file [...truncated 2 lines...] -------------- next part -------------- A non-text attachment was scrubbed... Name: build.log Type: application/octet-stream Size: 2368 bytes Desc: not available URL: