[Gluster-devel] [Gluster-users] VMs blocked for more than 120 seconds

Krutika Dhananjay kdhananj at redhat.com
Mon May 13 07:19:25 UTC 2019


What version of gluster are you using?
Also, can you capture and share volume-profile output for a run where you
manage to recreate this issue?
https://docs.gluster.org/en/v3/Administrator%20Guide/Monitoring%20Workload/#running-glusterfs-volume-profile-command
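
Something along these lines should do it (assuming the volume is named
"myvol" -- substitute your actual volume name):

    # start collecting per-brick I/O statistics on the volume
    gluster volume profile myvol start
    # ...reproduce the freeze in one of the VMs...
    # dump the cumulative and interval statistics collected so far
    gluster volume profile myvol info > profile-output.txt
    # stop profiling once you have what you need
    gluster volume profile myvol stop
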
Let me know if you have any questions.

-Krutika

On Mon, May 13, 2019 at 12:34 PM Martin Toth <snowmailer at gmail.com> wrote:

> Hi,
>
> There is no healing operation, no peer disconnects, no read-only
> filesystem. Yes, the storage is slow and unavailable for those 120 seconds,
> but why? It's all SSD with 10G networking and performance is otherwise good.
>
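> For reference, these are the sort of checks I ran (the volume name below
> is just a placeholder):
>
>     # any files pending heal on any brick?
>     gluster volume heal myvol info
>     # are all peers connected?
>     gluster peer status
>     # are all bricks and self-heal daemons up?
>     gluster volume status myvol
>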
> > you'd have its log on qemu's standard output,
>
> If you mean /var/log/libvirt/qemu/vm.log, there is nothing in it. I have
> been looking into this problem for more than a month and have tried
> everything, but I can't find anything. Any more clues or leads?
>
> BR,
> Martin
>
> > On 13 May 2019, at 08:55, lemonnierk at ulrar.net wrote:
> >
> > On Mon, May 13, 2019 at 08:47:45AM +0200, Martin Toth wrote:
> >> Hi all,
> >
> > Hi
> >
> >>
> >> I am running replica 3 on SSDs with 10G networking. Everything works OK,
> >> but VMs stored in the Gluster volume occasionally freeze with “Task XY
> >> blocked for more than 120 seconds”.
> >> The only solution is to power off the VM (hard) and then boot it up
> >> again. I am unable to SSH in or log in on the console; it is probably
> >> stuck on some disk operation. No error/warning logs or messages are
> >> stored in the VM's logs.
> >>
> >
> > As far as I know this should be unrelated; I get this during heals
> > without any freezes. I think it just means the storage is slow.
> >
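> > If you want to see how often the watchdog fires and what its threshold
> > is, something like this inside the guest should show it (standard Linux
> > paths, nothing gluster-specific):
> >
> >     # hung-task warnings the guest kernel has already logged
> >     dmesg | grep -i "blocked for more than"
> >     # the watchdog threshold, 120 seconds by default
> >     cat /proc/sys/kernel/hung_task_timeout_secs
> >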
> >> KVM/libvirt (qemu) uses libgfapi and a FUSE mount to access the VM disks
> >> on the replica volume. Can someone advise how to debug this problem or
> >> what could cause these issues?
> >> It’s really annoying; I’ve tried to google everything but nothing came
> >> up. I’ve tried changing the virtio-scsi-pci disk driver to
> >> virtio-blk-pci, but it’s not related.
> >>
> >
> > Any chance your gluster goes read-only? Have you checked your gluster
> > logs in /var/log/glusterfs to see if maybe the peers lose each other
> > sometimes?
> >
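> > A quick way to scan those logs for trouble (file names vary a bit with
> > your gluster version, mount point and brick paths):
> >
> >     # error-level messages from the clients and bricks
> >     grep " E " /var/log/glusterfs/*.log /var/log/glusterfs/bricks/*.log
> >     # peer or brick disconnect messages
> >     grep -i disconnect /var/log/glusterfs/glusterd.log
> >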
> > For libgfapi accesses you'd have its log on qemu's standard output;
> > that might contain the actual error at the time of the freeze.
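> >
> > If nothing useful shows up there, temporarily raising the client-side
> > log level while you reproduce the freeze might catch it (this affects
> > every client of the volume, so expect chattier logs):
> >
> >     # more verbose logging from the fuse/gfapi clients
> >     gluster volume set myvol diagnostics.client-log-level DEBUG
> >     # set it back once you have captured the freeze
> >     gluster volume set myvol diagnostics.client-log-level INFO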