[Gluster-infra] slave26 was full, slave23 disappeared

Vijay Bellur vbellur at redhat.com
Wed Mar 18 07:15:44 UTC 2015


On 03/18/2015 10:29 AM, Vijay Bellur wrote:
> On 03/18/2015 03:10 AM, Justin Clift wrote:
>> On 17 Mar 2015, at 18:03, Michael Scherer <mscherer at redhat.com> wrote:
>>> Hi,
>>>
>>> so slave26 was full (some 35G tar in /archiveds build). I fixed it, so
>>> that one is good; it was just a lot of core files.
>>
>> Yeah, this seems to happen occasionally.  I've just been nuking the
>> oversized tar file that fills the partition, and restarting the box.
>>
>> We should probably keep one of these core files around and look into
>> why this keeps happening.  We don't want to let a bug that does this
>> slip through into production builds... :/
>>
>
> slave27 is now full :-/. Can anybody please help?
>
> {standard input}: Assembler messages:
> {standard input}:84778: Fatal error: can't write .libs/nfs3.o: No space
> left on device
> make[5]: *** [nfs3.lo] Error 1
>
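
For whoever gets to a full slave first, something like this should
locate the offenders (a rough sketch, untested on the slaves themselves;
the 1G threshold is a guess):

    # where did the space go on this box?
    df -h
    # largest files/dirs on the root filesystem (GNU sort -h orders
    # the human-readable sizes that du -h prints)
    du -ahx / 2>/dev/null | sort -rh | head -n 20
    # oversized archives and core files specifically
    find / -xdev -type f \( -name '*.tar*' -o -name 'core*' \) \
        -size +1G 2>/dev/null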

And now slave20 is full. Can somebody please provide a backtrace from
one of the cores?
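
Something along these lines should capture one (a sketch only; the
binary path and core name below are placeholders, substitute whatever
`file` reports):

    # figure out which executable dumped the core
    file /core.12345
    # dump backtraces for all threads to a text file; the glusterfsd
    # path is an assumption -- use the binary that `file` reported
    gdb -batch -ex 'thread apply all bt full' \
        /usr/sbin/glusterfsd /core.12345 > backtrace.txt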

Thanks,
Vijay


