[Gluster-devel] Regression testing results for master branch
Kaushal M
kshlmster at gmail.com
Thu May 22 06:15:23 UTC 2014
It should be possible. I'll check and make the change.
~kaushal
On Thu, May 22, 2014 at 8:14 AM, Pranith Kumar Karampuri <pkarampu at redhat.com> wrote:
>
>
> ----- Original Message -----
> > From: "Pranith Kumar Karampuri" <pkarampu at redhat.com>
> > To: "Justin Clift" <justin at gluster.org>
> > Cc: "Gluster Devel" <gluster-devel at gluster.org>
> > Sent: Thursday, May 22, 2014 6:23:16 AM
> > Subject: Re: [Gluster-devel] Regression testing results for master branch
> >
> >
> >
> > ----- Original Message -----
> > > From: "Justin Clift" <justin at gluster.org>
> > > To: "Pranith Kumar Karampuri" <pkarampu at redhat.com>
> > > Cc: "Gluster Devel" <gluster-devel at gluster.org>
> > > Sent: Wednesday, May 21, 2014 11:01:36 PM
> > > Subject: Re: [Gluster-devel] Regression testing results for master branch
> > >
> > > On 21/05/2014, at 6:17 PM, Justin Clift wrote:
> > > > Hi all,
> > > >
> > > > Kicked off 21 VMs in Rackspace earlier today, running the regression
> > > > tests against the master branch.
> > > >
> > > > Only 3 VMs failed out of the 21 (86% PASS, 14% FAIL), with all three
> > > > failures being for the same test:
> > > >
> > > > Test Summary Report
> > > > -------------------
> > > > ./tests/bugs/bug-948686.t    (Wstat: 0 Tests: 20 Failed: 2)
> > > >   Failed tests:  13-14
> > > > Files=230, Tests=4373, 5601 wallclock secs ( 2.09 usr 1.58 sys + 1012.66 cusr 688.80 csys = 1705.13 CPU)
> > > > Result: FAIL
> > >
> > >
> > > Interestingly, this one looks like a simple time-based thing
> > > too. The failed tests are the ones after the sleep:
> > >
> > > ...
> > > #modify volume config to see change in volume-sync
> > > TEST $CLI_1 volume set $V0 write-behind off
> > > #add some files to the volume to see effect of volume-heal cmd
> > > TEST touch $M0/{1..100};
> > > TEST $CLI_1 volume stop $V0;
> > > TEST $glusterd_3;
> > > sleep 3;
> > > TEST $CLI_3 volume start $V0;
> > > TEST $CLI_2 volume stop $V0;
> > > TEST $CLI_2 volume delete $V0;
> > >
> > > Do you already have this one on your radar?
> >
> > It wasn't, thanks for bringing it onto my radar :-). Sent
> > http://review.gluster.org/7837 to address this.
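> >
> > Roughly, the idea of the change (sketched here, not quoted from the
> > patch; EXPECT_WITHIN and peer_count are the test-framework helpers I
> > assume are available) is to replace the fixed sleep with a wait for the
> > restarted glusterd to rejoin the cluster:
> >
> > TEST $glusterd_3;
> > # wait, up to the probe timeout, until the other peers are connected
> > # again, instead of hoping that 3 seconds is always enough
> > EXPECT_WITHIN $PROBE_TIMEOUT 2 peer_count;
> > TEST $CLI_3 volume start $V0;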
>
> Kaushal,
> I made this fix based on the assumption that the script is waiting for
> all glusterds to be online. I could not check the logs because the
> glusterds spawned by cluster.rc do not seem to store their logs in the
> default location. Do you think we can change the script so that we can
> also get logs from the glusterds spawned by cluster.rc?
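>
> Just to illustrate what I have in mind (a sketch, not the actual
> cluster.rc code), each cluster glusterd could be started with an explicit
> log file in a location the regression job already collects, for example:
>
>     glusterd --log-file=$B0/glusterd-$i.log --pid-file=$B0/glusterd-$i.pid
>
> where $B0 and $i stand in for the per-run workspace and the node index;
> --log-file and --pid-file are standard glusterd options, but exactly how
> cluster.rc invokes glusterd is an assumption on my part.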
>
> Pranith
>
> >
> > Pranith
> >
> > >
> > > + Justin
> > >
> > > --
> > > Open Source and Standards @ Red Hat
> > >
> > > twitter.com/realjustinclift
> > >
> > >
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel at gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-devel
>