[Gluster-infra] slave21 and 23 offline due to disk full

Raghavendra Talur rtalur at redhat.com
Wed Sep 9 17:42:57 UTC 2015


On Wed, Sep 9, 2015 at 8:21 PM, Michael Scherer <mscherer at redhat.com> wrote:

> Hi,
>
> just found out that slave21 and slave23 were offline due to their disks
> being full. The issue is 14G of logs, caused by something creating a
> tarball of /var/log/glusterfs and placing the tarball in /var/log/glusterfs/.
>
> Ndevos says the bug is fixed, but I would rather investigate in more
> detail. Does anyone have some information or a pointer?
>
> (slave26 and slave46 are, however, just without ssh, so I am going to
> reboot them)
>

I am the culprit.
This patch, http://review.gluster.org/#/c/12109/, is the cause, and it has
been taken care of.
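
To illustrate why that fills a disk so fast, here is a sketch of the
failure mode (the filename and tar invocation are assumptions for
illustration, not the literal code from 12109): each run's tarball also
contains every earlier tarball, and compressed data barely recompresses,
so the directory balloons with every run.

    # BUGGY (assumed shape of the problem): the tarball lands inside the
    # directory being archived, so every later run re-archives all the
    # earlier tarballs as well.
    tar -czf "/var/log/glusterfs/logs-$(date +%Y%m%d-%H%M%S).tar.gz" /var/log/glusterfs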

The test ran on Sept 5th. If the slave had a little space left and executed
the next build, the logs would have been cleared and it should be fine.
Sorry for the trouble; I will be more careful next time I am changing test
infra scripts.

A better fix is posted at http://review.gluster.org/#/c/12110/ and is
awaiting reviews!
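
For illustration, a minimal sketch of the safer pattern (the archive path
and the cleanup step are assumptions for this sketch, not the actual
content of 12110):

    # SAFER (a sketch under assumptions; see 12110 for the real fix):
    # write the archive outside the directory being archived, then clear
    # the originals so the next build starts with a small log directory.
    archive_dir=/archived-builds    # assumption: any path outside /var/log/glusterfs
    mkdir -p "$archive_dir"
    tar -czf "$archive_dir/glusterfs-logs-$(date +%Y%m%d-%H%M%S).tar.gz" /var/log/glusterfs
    rm -f /var/log/glusterfs/*.log

The key point is simply that the archive must live outside the tree being
archived.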


> --
> Michael Scherer
> Sysadmin, Community Infrastructure and Platform, OSAS
>