[Gluster-infra] slave21 and 23 offline due to disk full
Michael Scherer
mscherer at redhat.com
Wed Sep 9 14:51:30 UTC 2015
Hi,
just found out that slave21 and slave23 were offline due to their disks
being full. The issue is 14G of logs, caused by something creating a
tarball of /var/log/glusterfs and placing the tarball back
in /var/log/glusterfs/ itself.
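For reference, a quick way to spot runaway tarballs like these is a
find(1) with a size filter; the one-liner below is only a sketch (the
size threshold and tarball names are assumptions, not what actually ran
on the slaves), demonstrated against a scratch directory so it is safe
to try anywhere:

```shell
# Real-world form (assumed path from the report, threshold is a guess):
#   find /var/log/glusterfs -maxdepth 1 -name '*.tar*' -size +100M -exec ls -lh {} \;
# Safe demonstration in a throwaway directory:
logdir=$(mktemp -d)
# Simulate a large stray tarball sitting inside the log directory.
dd if=/dev/zero of="$logdir/glusterfs-logs.tar.gz" bs=1M count=1 2>/dev/null
# List tarballs over 512 KiB directly in the log directory (not recursing).
find "$logdir" -maxdepth 1 -name '*.tar*' -size +512k
rm -rf "$logdir"
```

Note that a tarball written into the directory being archived gets
swallowed by the next archiving run, so each tarball can contain the
previous ones and the disk fills quickly.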
Ndevos says the bug is fixed, but I would rather investigate in more
detail. Does someone have more information, or a pointer?
(slave26 and slave46, however, are just without ssh, so I am going to
reboot them.)
--
Michael Scherer
Sysadmin, Community Infrastructure and Platform, OSAS