[Gluster-devel] Agenda for Community meeting today

Justin Clift justin at gluster.org
Wed Mar 5 16:50:54 UTC 2014


[done manually, as MeetingBot wasn't working at the time]

Meeting Summary
***************

  1. Agenda items from last week (15:04)

  2. build.gluster.org (15:21)

  3. Scaling jenkins infrastructure (15:28)
       a. jclift to get the Rackspace info and credentials from lpabon + johnmark (15:36)
       b. jclift to give the Rackspace credentials to lalatenduM + purpleidea so they can set up the Gluster Puppet Rackspace testing environment (15:39)
       c. lalatenduM + purpleidea to try setting up Rackspace VMs for automatic testing using puppet-gluster (15:38)
       d. jclift will include lpabon in the jenkins testing stuff (15:47)

  4. 3.5.0 (15:41)
       a. msvbhat will email Vijay to find out where the geo-replication fixes for beta3 stand, and will try to get them into 3.5.0 beta4 if they're not already in (15:45)

  5. 3.4.3 (15:46)

  6. Gluster 3.6 (15:59)

Meeting ended at 16:00 UTC - (full logs at the end of this email)

Agenda items we didn't complete will be addressed next meeting, Wed 12th March 2014.


Action items
************

  1. jclift to get the Rackspace info and credentials from lpabon + johnmark
  2. jclift to give the Rackspace credentials to lalatenduM + purpleidea so they can set up the Gluster Puppet Rackspace testing environment
  3. lalatenduM + purpleidea to try setting up Rackspace VMs for automatic testing using puppet-gluster
  4. msvbhat will email Vijay to find out where the geo-replication fixes for beta3 stand, and will try to get them into 3.5.0 beta4 if they're not already in
  5. jclift will include lpabon in the jenkins testing stuff


Action items, by person
***********************

  1. jclift
       1. jclift to get the Rackspace info and credentials from lpabon + johnmark
       2. jclift to give the Rackspace credentials to lalatenduM + purpleidea so they can set up the Gluster Puppet Rackspace testing environment
       3. jclift will include lpabon in the jenkins testing stuff
  2. lalatenduM
       1. lalatenduM + purpleidea to try setting up Rackspace VMs for automatic testing using puppet-gluster
  3. msvbhat
       1. msvbhat will email Vijay to find out where the geo-replication fixes for beta3 stand, and will try to get them into 3.5.0 beta4 if they're not already in
  4. purpleidea
       1. lalatenduM + purpleidea to try setting up Rackspace VMs for automatic testing using puppet-gluster


People present (lines said)
***************************

  * jclift (135)
  * kkeithley (32)
  * purpleidea (22)
  * ndevos (21)
  * msvbhat (16)
  * lalatenduM (8)
  * kshlm (4)
  * jdarcy (3)
  * doekia (2)
  * sas_ (1)
  * social (1)

Full log
********

15:03 < jclift> Meeting time!
15:03 -!- doekia [~doekia at sta21-h03-89-88-213-2.dsl.sta.abo.bbox.fr] has joined #gluster-meeting
15:03 < purpleidea> o hai
15:03 < doekia> hi
15:03  * lalatenduM here
15:03 < jclift> #startmeeting Gluster-Community-Meeting
15:03 -!- joherr [joherr at nat/redhat/x-hipolpolmcfeeftc] has joined #gluster-meeting
15:03 < msvbhat> Hello all
15:03  * kkeithley estamos aqui [we're here]
15:03  * purpleidea yo tambien [me too]
15:03 < lalatenduM> kkeithley, :)
15:03 < jclift> :)
15:03 < jclift> :)
15:03  * sas_ says hi to all
15:03  * ndevos waves _o/
15:04 < jclift> Cool
15:04 < jclift> #topic Agenda items from last week
15:04 < jclift> (really hoping meeting bot is recognising my commands :>)
15:04 < ndevos> there was no meeting last week?
15:04 < jclift> Gah.  From the meeting before.
15:04 < jclift> eg 2 weeks ago
15:04 -!- jdarcy [~jdarcy at pool-173-76-204-4.bstnma.fios.verizon.net] has joined #gluster-meeting
15:05 < jclift> jdarcy: Hiya.  We're just starting.
15:05 < jclift> So, items from last week.
15:05 < purpleidea> jdarcy: i think #startmeeting has to be on a line by itself or this didn't start (maybe)
15:05 < purpleidea> jclift:
15:05 < purpleidea> sorry jclift not jdarcy
15:05 < jclift> #startmeeting
15:05 < jclift> Hmmm
15:05 < jclift> 1 sec
15:05 < jclift> #endmeeting
15:05 < purpleidea> jclift: normally the bot says "meeting has started"
15:05 < kshlm> the meeting bots aren't here
15:05 < jclift> #endmeeting
15:05 < purpleidea> #endmeeting
15:05 < jclift> @#$@#$@$#
15:06 < jclift> k, I'll manually write up the notes.  Let's pretend they're here just so we know what's going on :)
15:06 < jclift> So topic: Items from the last meeting
15:06 -!- abyss^ [~abyss at i-free.pl] has joined #gluster-meeting
15:06 < jclift> hagarth to consider new rpm packaging for 3.5
15:07 < jclift> "hagarth to start a thread on review of snapshot patch"
15:07 < kkeithley> do we know what's wrong with the current rpm packaging?
15:07 < jclift> This one I'm not sure about.  anyone know what's the status of that?
15:08 -!- dbruhn_ [~dbruhn at 66.202.139.30] has joined #gluster-meeting
15:08 < jclift> kkeithley: It's from when Niels noticed dependency problems about glusterfs-server requiring (I think python?)
15:09 < jclift> kkeithley: And it's also about splitting out glupy + rot13 into glusterfs-extra-xlators.  We're proceeding with the glusterfs-extra-xlators package, and Niels fixed the other packaging problem in the meantime
15:09 < ndevos> yeah, I think hagarth was not sure if the patches for glupy packaging in glusterfs-extra-xlators would be ready
15:09 < lalatenduM> jclift, we had a discussion around the RPM packaging in the last meeting... however I don't remember much about it
15:09 -!- dbruhn [~dbruhn at 66.202.139.30] has quit [Ping timeout: 264 seconds]
15:09 < jclift> lalatenduM: Yeah. It seems pretty much "done" now. ;)
15:10 < jclift> Any objections, else moving on to the next one?
15:10 < msvbhat> jclift: There are some other xlators which can be put in glusterfs-extra-xlators.
15:10 < msvbhat> jclift: Like errorgen and read-only ?
15:10 < doekia> my 2 cents question ... (debian wheezy), the init.d scripts mention $fs_remote as a dependency ... isn't it the other way around? i.e. gluster provides the $fs_remote
15:10 < jclift> msvbhat: No objections here.  Bring it up for discussion on the mailing list?
15:10  * msvbhat will talk to hagarth to see if he is taking care of it
15:10 < ndevos> msvbhat: yeah, but that is a 2nd step
15:11 < msvbhat> ndevos: jclift: Okay...
15:11 < ndevos> msvbhat: main concern was that glusterfs-server does not require python atm, and correct glupy packaging would pull that in
15:12 < ndevos> (mumble mumble cloud images....)
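[For reference: ndevos's dependency concern can be checked directly with rpm. A minimal sketch; the glupy install path below is an assumption based on where xlators normally land, not something confirmed in the meeting:]

    # Does glusterfs-server currently pull in python? (no output means no)
    rpm -q --requires glusterfs-server | grep -i python

    # After the split, which package owns the glupy xlator?
    rpm -qf /usr/lib64/glusterfs/*/xlator/features/glupy.so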
15:12 < jclift> doekia: Good question.  purpleidea might know?
15:12 < msvbhat> ndevos: Ahh... Okay... I wasn't there at last meeting... :(
15:12 < ndevos> msvbhat: np!
15:12 < jclift> doekia: Actually, it's probably a better discussion for #gluster-devel (IRC and/or mailing list) :)
15:13 < purpleidea> doekia: jclift: semiosis is probably the best bet for debian stuff
15:13 < jclift> msvbhat: Last meeting logs: http://meetbot.fedoraproject.org/gluster-meeting/2014-02-19/gluster-meeting.2014-02-19-15.00.log.html
15:13 < jclift> (but that's a bit much to read right now :>)
15:13  * msvbhat made a mental note to read it later
15:14 < jclift> k, so next item from previous meeting: "hagarth to start a thread on review of snapshot patch"
15:14 -!- larsks [~larsks at unaffiliated/larsks] has joined #gluster-meeting
15:14 -!- aravindavk [~aravinda at 106.216.137.29] has joined #gluster-meeting
15:14 < jclift> I'm not sure if that's done or not.  Anyone know?
15:15 < jclift> 3
15:15 < jclift> 2
15:15  * ndevos missed the email, if there was one
15:15 < jclift> 1
15:15 -!- tdasilva [thiago at nat/redhat/x-vepycftwvbiyldpu] has joined #gluster-meeting
15:15 < kshlm> last I heard, there were more snapshot changes on the way
15:15 < jclift> Yeah, I'll leave it as "still in progress"
15:15 < kshlm> so we were asked to wait a couple of days before reviewing
15:15 < purpleidea> jclift: also on the note of snapshotting, i'm trying to get the automation aspects done-- if someone has the recommended list of lvm commands to run to provide the right thing, i'd love to see them.
15:15 < msvbhat> Rajesh has sent a mail to gluster-devel on 21st Feb
15:16  * purpleidea waves at tdasilva 
15:16 -!- social [~social at ip-89-102-175-94.net.upcbroadband.cz] has joined #gluster-meeting
15:16 < jclift> msvbhat: Ahhh, k.  Looks like this item's done then.  eg the initial email is sent
15:16  * jclift marks it off as complete
15:17 < msvbhat> purpleidea: I think I have it. Will send it to you later
15:17 < msvbhat> purpleidea: Will ping you offline about it.
15:17 < jclift> Next item from last meeting: "kkeithley to look into rpm.t failure"
15:17 < ndevos> #link http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/5738
15:17 < ndevos> (thats for the snapshot)
15:18 < jclift> That one's complete.  Niels took time over a weekend to get funky on it and found + fixed the root cause.
15:18 < purpleidea> msvbhat: _much_ appreciated!
15:19 < jclift> next item "lalatenduM to set up bug triage process page in wiki" is already marked on the etherpad by lala as still being worked on.
15:19 < lalatenduM> jclift, yes
15:19 < jclift> next item after that: "jclift_ and johnmark to update guidelines on community standards"
15:19 < jclift> I've been trying to catch up with johnmark through the last week, but he's been super busy recently.
15:20 < lalatenduM> jclift, I agree :)
15:20 < jclift> I'm hoping to have time to discuss with him by next meeting.
15:20 < purpleidea> jclift: he'll be back in westford thursday
15:20 < jclift> Cool. :)
15:20 < jclift> k, last action item from previous meeting: "hagarth to send out a note on abandoning patches over 1yo"
15:20 < jclift> AFAIK this is still to be done
15:21 < jclift> So I'm thinking it'll likely happen this week
15:21 < jclift> Now, this weeks agenda... :)
15:21 < jclift> #topic build.gluster.org
15:22 < kkeithley> just a friendly reminder to clean up after yourselves
15:22 < kkeithley> and make sure tests clean up properly
15:22 < jclift> kkeithley: It comes down to something not cleaning up loopback mounts?
15:23 < kkeithley> loopback mounts tied to orphaned dm (device mapper) volumes
15:23 -!- ts468 [~ts468 at dynamic23.vpdn.csx.cam.ac.uk] has joined #gluster-meeting
15:23 < ndevos> I suspect some bd-xlator test, but have not looked into the details
15:23 < jclift> Do we know if it's something resulting from manual runs/effort/something on the box, or is it a side effect of our tests not cleaning up properly atm?
15:24 < kkeithley> most of the dm volumes had "snap" in their names
15:24 < jclift> eg we'll need to figure out what test is causing the problem, and then fix it
15:24 < jclift> kkeithley: Ahhh.
15:25 < jclift> k.  Is there an action item or policy change or something we should do here?
15:25 < ndevos> hmm, "snap", what patch could have caused that?
15:25 < ndevos> well, I still hope that we can set up a temporary vm to run tests, and throw it away afterwards
15:26 < jclift> ndevos: Any of the ~90 snapshot patches? (that were all merged into 1?)
15:26 < kkeithley> a policy of being careful that you or your test don't leave the system in a funky state that breaks subsequent regression tests
15:26 < ndevos> jclift: just a guess
15:26 < jclift> Yeah
15:26  * ndevos calls that "common sense"
15:26 < jclift> kkeithley: Kind of thinking the "do stuff in temp vms" might have advantages here too
15:27 < jclift> After all, we're definitely going to have patches come through occasionally that aren't up to scratch and don't do what they should
15:27 < jclift> Having such patches then screw up our testing env is kind of non-optimal
15:28 < jclift> Anyway, it's food for thought
15:28 < jclift> Next item
15:28 < ndevos> I don't think everyone can start regression tests, just be careful when you start one
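[For reference: the cleanup kkeithley is asking for usually amounts to an exit trap in each test script. A minimal sketch with illustrative names — the actual snapshot tests may differ:]

    #!/bin/bash
    # Hypothetical test that builds an LVM volume group on a loopback
    # device: the pattern that leaves orphaned dm volumes when a test
    # dies before cleaning up.
    VG_NAME="patchy_snap_vg"              # illustrative name
    LOOP_FILE="/var/tmp/patchy_snap.img"  # illustrative path

    cleanup_snap_test () {
        # Tear down the device-mapper volumes first...
        vgremove -ff "$VG_NAME" 2>/dev/null
        # ...then detach the loop device backing them and remove the file.
        loopdev=$(losetup -j "$LOOP_FILE" | cut -d: -f1)
        [ -n "$loopdev" ] && losetup -d "$loopdev"
        rm -f "$LOOP_FILE"
    }

    # Run the cleanup however the test exits, pass or fail.
    trap cleanup_snap_test EXIT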
15:28 < jclift> #topic Scaling jenkins infrastructure
15:28 < jclift> kkeithley: Info on the 40+ machines mentioned on the etherpad?
15:29 -!- aravindavk [~aravinda at 106.216.137.29] has quit [Ping timeout: 265 seconds]
15:29 < kkeithley> well, pm me for a pointer to the internal document describing what we'll be rolling out soon. Not sure it's appropriate to go into more detail here (although I might be wrong).
15:30 < kkeithley> Suffice it to say, we have lots of machines that will be coming on line Any Day Now that we can throw at the problem
15:30 < kkeithley> plus RackSpace instances
15:30 < lalatenduM> kkeithley, awesome!
15:31 < jdarcy> Another advantage of running tests in VMs/containers that get recycled frequently is that we'll catch any "oh, that just happens to be on the test machine" dependencies.
15:31 -!- aravindavk [~aravinda at 106.216.137.29] has joined #gluster-meeting
15:31 < jclift> kkeithley: I'm kind of inclined to think that if it's upstream Community purposed machines (eg not specifically for RHS), then discussing here should be fine as long as there's no confidential info, etc.
15:31 < jdarcy> Including version-skew stuff.
15:32 < jclift> jdarcy: Yeah.  I get worried about the software that's on build.gluster.org a lot
15:32 < jclift> kkeithley: That being said, my preference for "let's discuss here"... is just me.  I'm only temping in the meeting leader role. ;)
15:33 < kkeithley> indeed. The machines came from the old Gluster, Inc. Sunnyvale lab. We finally have space to get them on-line again, and that's what's happening
15:33 < purpleidea> jdarcy: it would be easy to have a machine that builds and runs tests in pristine vagrant/puppet/gluster images, and blows them away at the end of each test... fyi
15:33 < purpleidea> maybe useful to run once in a blue moon to catch any errors caused by unclean machines perhaps
15:33 < jclift> purpleidea: That sounds like an optimal way of spinning things up in RAX or similar yeah?
15:34 < purpleidea> jclift: it's a great way to test... it's how i test... i'm not sure what RAX is though
15:34 < jclift> On that note, lpabon said last meeting that he has access to RAX (Rackspace) through Johnmark.
15:34 < kkeithley> using vagrant+puppet on these machines is a great idea. Right now jenkins (and gerrit) are running as vm guests and don't have enough disk or horsepower by themselves to do that.
15:35 < kkeithley> yes, we can do that with rackspace vm guests as well. And why not do both?
15:35 < jclift> I'll ping lpabon + johnmark to see what we're allowed to do with those credentials and stuff
15:35 < purpleidea> kkeithley: right. good point. you could theoretically do nested vm's if the vm's were heavy, but running the vagrant tests on iron is better
15:35  * social thinks optimal long-term testing data would also come if gluster got somewhere into internal fedora infrastructure as a backend for, for example, git
15:36 -!- kdhananjay [~krutika at 122.167.96.113] has quit [Quit: Leaving.]
15:36 < jclift> purpleidea: Do you have the time + inclination to try setting up this stuff in rackspace vm's if we get the credentials + associated info to you?
15:36 < purpleidea> jclift: TBD, but more likely if someone helps with the jenkins glue, i can help with the vagrant side
15:36 < jclift> #action jclift to get the Rackspace info and credentials from lpabon + johnmark
15:36 < kkeithley> I wasn't thinking nested vms. Just use vagrant+puppet to deploy rackspace vm instances on demand.
15:37 < jclift> kkeithley: Yeah, that's what I was thinking too
15:37 < jclift> Who do we have that knows Jenkins well?
15:37 < purpleidea> kkeithley: that's a good idea! actually, the best idea
15:38 < lalatenduM> I can help
15:38 < jclift> Cool
15:38 < jclift> #action lalatenduM + purpleidea to try setting up Rackspace vm's for automatic testing using puppet-gluster
15:39 < jclift> I'll get the Rackspace info to you guys when I have it
15:39 < purpleidea> jclift: cool. email me, and i'll send you my gpg key
15:39 < lalatenduM> jclift, purpleidea cool
15:39 < jclift> #action jclift to give the Rackspace credentials to lalatenduM + purpleidea so they can setup the Gluster puppet rackspace testing stuff
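[For reference: kkeithley's on-demand model boils down to a short job script. A minimal sketch, assuming the vagrant-rackspace provider plugin and a Vagrantfile that provisions via puppet-gluster; the test entry point is a placeholder:]

    # Bring up a pristine cloud VM, run the regression suite, then destroy
    # the VM regardless of the result so no state leaks into the next run.
    vagrant up --provider=rackspace
    vagrant ssh -c "sudo /opt/qa/regression.sh"   # placeholder entry point
    status=$?
    vagrant destroy -f
    exit $status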
15:40 < jclift> On this topic, a new guy has joined my team in Red Hat (OSAS).  He's a SysAdmin background guy who's pretty good, and might have time to help us out with upstream tasks.
15:40 < jclift> Not sure, but it's possible.
15:41 < jclift> Just food for thought, etc. ;)
15:41 < jclift> k, anything else on this topic, or move along?
15:41 < jclift> 3
15:41 < jclift> 2
15:41 < jclift> 1
15:41 < jclift> #topic 3.5.0
15:41 < jclift> beta4 to be out this week
15:42 < jclift> (that's what I'm reading)
15:42 < jclift> kkeithley: You do that don't you?
15:42 < msvbhat> jclift: Cool. When is it scheduled to release?
15:42 < kkeithley> hagarth is doing 3.5.0.  Once he releases I fire off rpm building for download.gluster.org
15:43 < jclift> Ahhh, k.
15:43 < msvbhat> aravindavk: Do you happen to know if patches to fix geo-rep upstream have gone in?
15:43 < jclift> In that case, it's just info from the etherpad then.
15:43 < jclift> msvbhat: Which ones?
15:44 < msvbhat> jclift: beta3 had a couple of geo-rep issues (deletes not syncing, faulty states etc)
15:44 -!- aravindavk [~aravinda at 106.216.137.29] has quit [Ping timeout: 240 seconds]
15:44 < msvbhat> Not sure if the patch has been sent to fix them
15:44 < jclift> msvbhat: k. On the etherpad there's an item underneath 3.5.0 saying "Feedback on geo-replication and quota testing awaited - AI: Vijay".
15:45 < msvbhat> jclift: Okay. I will talk with Vijay.
15:45 -!- anoopcs [~Thunderbi at 122.167.114.252] has joined #gluster-meeting
15:45 < jclift> #action msvbhat Will email Vijay to find out where the geo-replication fixes for beta3 are up to, and try to get them into 3.5.0 beta4 if they're not already
15:45 < jclift> (that's a long action item)
15:46 < jdarcy> Have to go pick up my wife's car and then drive in to the office.  See y'all later.
15:46 -!- jdarcy [~jdarcy at pool-173-76-204-4.bstnma.fios.verizon.net] has quit [Quit: Lingo - http://www.lingoirc.com]
15:46 < jclift> k, next item
15:46 < jclift> #topic 3.4.3
15:46 < jclift> Tracker BZ: https://bugzilla.redhat.com/show_bug.cgi?id=1060259
15:46 < glusterbot> Bug 1060259: unspecified, unspecified, ---, kkeithle, NEW , 3.4.3 tracker
15:47 < kkeithley> a couple patches are still in need of review.
15:47 < purpleidea> jclift: lpabon wants in on the jenkins stuff
15:47 < jclift> Looking at the etherpad, it seems like 3 requested patches are merged, but we have several still needing +2 reviews
15:47 < kkeithley> right
15:47 < jclift> #action jclift will include lpabon in the jenkins testing stuff
15:47 < kkeithley> and what about 977492/1008301?
15:48 < kkeithley> That fix hasn't been backported to 3.5 even.
15:48 < jclift> kkeithley: 977492 is the one requested by the Community Member on gluster-users the other day
15:48 < kkeithley> correct.
15:48 < jclift> I was just adding it to the list because he asked for it
15:48 < kkeithley> yes, I'm not questioning that part.
15:49 < jclift> kkeithley: So I guess we should backport it into 3.5 first, and then do 3.4.3?
15:49 < kkeithley> I'm just observing that it would be a teeny bit strange to fix it in 3.4 but not 3.5.
15:49 < jclift> Good point.
15:49 < kkeithley> It's a simple enough fix to backport
15:50 < kkeithley> and someone needs to actually provide a fix for 1041109
15:50 < jclift> kkeithley: Cool.  I skimmed over the BZ associated with it, but it's very lengthy.  If the patch makes sense to you, lets get it into 3.5 + 3.4.3 then.
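[For reference: the backport itself would follow the usual Gluster flow of cherry-picking the master fix onto the release branch. A minimal sketch, with a placeholder commit hash:]

    # Branch from the release the fix is wanted on...
    git checkout -b bug-977492-3.5 origin/release-3.5
    # ...cherry-pick the fix from master; -x records the original hash.
    git cherry-pick -x <master-commit-hash>
    # Submit the backport to Gerrit for review.
    ./rfc.sh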
15:50 < ndevos> what's the procedure here, one bug for all releases, or clone the bug for each release?
15:50  * ndevos prefers the 2nd, it's easier for tracking progress
15:51 < jclift> I'm not bothered either way, but I'm guessing there's an existing convention
15:51 < jclift> kkeithley: Any idea?
15:51 < kkeithley> I personally was doing a BZ per branch, but standing practice seems to be to lump fixes for all the branches into a single BZ. I don't like that, but that's just my opinion.
15:51 < jclift> Let's follow the standing practice atm, and we can discuss with Vijay and team about changing that for the future
15:52 < ndevos> sure
15:53 < jclift> #info kkeithley & ndevos prefer to have 1 BZ per branch for applying bug fixes, as it makes for easier tracking.  We should discuss with the team to see if this can be made the policy approach
15:53 < jclift> Ok, so the BZ's still needing review are:
15:54 < kkeithley> in the etherpad
15:54 < jclift> Yeah
15:54 -!- vpshastry [~varun at 122.167.129.147] has joined #gluster-meeting
15:54 < jclift> How do we normally get focus time on them?  Asking for reviewers on #gluster-devel ml?
15:55  * jclift is happy to try and draw attention to them that way
15:55 < jclift> (Ugh, we're nearly out of time)
15:55 < ndevos> I tend to check the MAINTAINERS file and add some likely candidates to the review request ;)
15:55 < kkeithley> begging and pleading
15:56 < jclift> ndevos: k, I can try doing that to see if it helps
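[For reference: ndevos's MAINTAINERS approach can also be scripted. A minimal sketch, assuming Gerrit's ssh interface on review.gluster.org; the component, change number, and address are placeholders:]

    # Find who maintains the component in question...
    grep -i -A 3 'geo-replication' MAINTAINERS
    # ...then add them as a reviewer on the pending change.
    ssh -p 29418 review.gluster.org gerrit set-reviewers --add maintainer@example.org 6543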
15:56 < jclift> kkeithley: With 1041109, any idea of the effort it'll take to create a fix?
15:56 < kkeithley> nope, no idea
15:57 < kkeithley> if we don't get a fix, and reviewed, I'll just drop it from 3.4.3
15:57 < jclift> k
15:57 < kkeithley> don't get a fix PDQ
15:57 < jclift> Sounds unlikely
15:57 < jclift> :/
15:57 -!- kdhananjay [~krutika at 122.167.96.113] has joined #gluster-meeting
15:57 < kkeithley> because we need to get 3.4.3 out before too much longer
15:57 < jclift> Yeah
15:58 -!- aravindavk [~aravinda at 117.96.0.52] has joined #gluster-meeting
15:58 < jclift> If we don't get it, we'll just push to 3.4.4
15:58 < jclift> k, I'm not sure what else to do for this agenda item (3.4.3)
15:58 < jclift> Any objections to moving on to the next one with our last 2 mins?
15:59 < jclift> #topic Gluster 3.6
15:59 < jclift> Apparently there's a Go/No-go meeting to be scheduled next week (according to the etherpad)
15:59 < jclift> That's all I personally know atm
15:59 < jclift> Anyone?
16:00 < jclift> k, that's time
16:00 < jclift> #endmeeting
16:00 < jclift> (unless anyone objects) ;)
16:01 < kkeithley> bye
16:01 < jclift> k, That's the end of the meeting.  Thanks everyone. :)
16:01 < ndevos> thanks!
16:01 < purpleidea> thanks
16:01 < jclift> We'll move the items we didn't get to over to next week.
16:02 < jclift> (none of the ones we missed seemed super immediate)
16:02 -!- ndevos [ndevos at redhat/ndevos] has left #gluster-meeting ["Meeting finished!"]
16:02 -!- tdasilva [thiago at nat/redhat/x-vepycftwvbiyldpu] has left #gluster-meeting []
16:03 -!- zodbot [supybot at fedora/bot/zodbot] has joined #gluster-meeting

--
Open Source and Standards @ Red Hat

twitter.com/realjustinclift




