[Gluster-devel] XFS kernel panic bug?
Niels de Vos
ndevos at redhat.com
Thu Jun 12 07:11:17 UTC 2014
On Thu, Jun 12, 2014 at 07:26:25AM +0100, Justin Clift wrote:
> On 12/06/2014, at 6:58 AM, Niels de Vos wrote:
> <snip>
> > If you capture a vmcore (needs kdump installed and configured), we may
> > be able to see the cause more clearly.
Oh, these seem to be Xen hosts. I don't think kdump (mainly kexec) works
on Xen. You would need to run xen-dump (or something like that) on the
Dom0. For that, you'll have to call Rackspace support, and I have no
idea how they handle such requests...
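For reference, a rough sketch of what such a dump would look like from
the Dom0 side (assuming a recent xl toolstack; older hosts would use the
equivalent xm command, and the domain name/path here are just
placeholders):

    # run on the Dom0, not inside the guest
    xl list                                     # find the guest's domain id/name
    xl dump-core <domid> /var/tmp/guest-vmcore  # write the guest's memory image to a file

The resulting file should then be readable with the crash utility
against the guest kernel's debuginfo, much like a regular kdump vmcore.
But as said, on Rackspace this would have to go through their support.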
> That does help, and so will Harsha's suggestion too probably. :)
That is indeed a solution that can mostly prevent such memory
deadlocks. Those options can be used to push the outstanding data out
earlier to the loop devices, and to the underlying XFS filesystem that
holds the backing files for the loop devices.
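Just to sketch the kind of tuning I mean (assuming Harsha's suggestion
was the vm.dirty_* sysctls; the values below are only examples, not
recommendations):

    # flush dirty pages earlier so a large writeback backlog cannot build up
    vm.dirty_background_ratio = 5
    vm.dirty_ratio = 10
    # or use absolute limits instead of percentages
    #vm.dirty_background_bytes = 67108864
    #vm.dirty_bytes = 134217728

These would go into /etc/sysctl.conf (or a file under /etc/sysctl.d/)
and get applied with 'sysctl -p'.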
Cheers,
Niels
> I'll look into it properly later on today.
>
> For the moment, I've rebooted the other slaves which seems to put them into
> an ok state for a few runs.
>
> Also just started some rackspace-regression runs on them, using the ones
> queued up in the normal regression queue.
>
> The results are being updated live into Gerrit now (+1/-1/MERGE CONFLICT).
>
> So, if you see any regression runs pass on the slaves, it's worth removing
> the corresponding job from the main regression queue. That'll help keep
> the queue shorter for today at least. :)
>
> Btw - Happy vacation Niels :)
>
> /me goes to bed
>
> + Justin
>
> --
> GlusterFS - http://www.gluster.org
>
> An open source, distributed file system scaling to several
> petabytes, and handling thousands of clients.
>
> My personal twitter: twitter.com/realjustinclift
>