[Gluster-infra] slave21 and 23 offline due to disk full
rtalur at redhat.com
Wed Sep 9 17:42:57 UTC 2015
On Wed, Sep 9, 2015 at 8:21 PM, Michael Scherer <mscherer at redhat.com> wrote:
> I just found out that slave21 and 23 were offline due to their disks being
> full. The issue is 14G of logs, caused by something creating a tarball
> of /var/log/glusterfs and placing the tarball in /var/log/glusterfs/.
> Ndevos says the bug is fixed, but I would rather investigate in more
> detail. Does anyone have some information or a pointer?
> (slave26 and slave46 are however just without ssh, so going to reboot
I am the culprit.
This patch, http://review.gluster.org/#/c/12109/, is the cause, and it has
been taken care of.
The test ran on Sept 5th. If the slave had a little space left and executed
the next build, the logs would have been cleared and it should be fine.
Sorry for the trouble; I will be more careful next time I am messing with
test infra scripts.
A better fix is posted at http://review.gluster.org/#/c/12110/ and is
waiting for review.
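For anyone hitting the same thing: the failure mode is that the archive was
written inside the directory being archived, so each run re-archived every
earlier tarball and the directory grew until the disk filled. A minimal
sketch of the buggy and fixed patterns, using temp dirs instead of the real
/var/log/glusterfs (all paths and names here are illustrative assumptions,
not taken from the actual patches):

```shell
#!/bin/sh
# Stand-ins for the real directories; mktemp keeps this safely runnable.
LOGDIR="$(mktemp -d)"   # plays the role of /var/log/glusterfs
DEST="$(mktemp -d)"     # a destination OUTSIDE the log directory
echo "log line" > "$LOGDIR/cli.log"

# BUGGY pattern (do not do this): the tarball lands inside the directory
# being archived, so the next run re-archives it, and the directory only grows.
# tar -czf "$LOGDIR/logs.tar.gz" -C "$LOGDIR" .

# FIXED pattern: write the tarball outside the log directory, then clear
# the logs so they do not accumulate between builds.
tar -czf "$DEST/logs.tar.gz" -C "$LOGDIR" .
rm -f "$LOGDIR"/*.log
```

After this runs, the archive lives in `$DEST` and `$LOGDIR` is empty, so
repeated builds stay bounded in disk usage.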
> Michael Scherer
> Sysadmin, Community Infrastructure and Platform, OSAS
> Gluster-infra mailing list
> Gluster-infra at gluster.org