<div dir="ltr">I'm not so sure the problem is with sharding. Basically it's saying that seek is not supported, which means that something between shard and the bricks doesn't support it. DHT didn't support seek before 10.3, but if I'm not wrong you are already using 10.3, so the message is weird. But in any case this shouldn't cause a crash. The stack trace seems to indicate that the crash happens inside disperse, but without a core dump there's little more I can do.<div><br></div><div><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Thu, Dec 1, 2022 at 5:27 PM Angel Docampo <<a href="mailto:angel.docampo@eoniantec.com">angel.docampo@eoniantec.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">Well, that lasted longer, but it crashed once again, same node, same mountpoint... Fortunately, I had preventively moved all the VMs to the underlying ZFS filesystem these past days, so none of them have been affected this time...<div><br></div><div>dmesg shows this</div><div><span style="font-family:monospace"><span style="color:rgb(0,0,0)">[2022-12-01 15:49:54] INFO: task iou-wrk-637144:946532 blocked for more than 120 seconds.
</span><br>[2022-12-01 15:49:54] Tainted: P IO 5.15.74-1-pve #1
<br>[2022-12-01 15:49:54] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
<br>[2022-12-01 15:49:54] task:iou-wrk-637144 state:D stack: 0 pid:946532 ppid: 1 flags:0x00004000
<br>[2022-12-01 15:49:54] Call Trace:
<br>[2022-12-01 15:49:54] <TASK>
<br>[2022-12-01 15:49:54] __schedule+0x34e/0x1740
<br>[2022-12-01 15:49:54] ? kmem_cache_free+0x271/0x290
<br>[2022-12-01 15:49:54] ? mempool_free_slab+0x17/0x20
<br>[2022-12-01 15:49:54] schedule+0x69/0x110
<br>[2022-12-01 15:49:54] rwsem_down_write_slowpath+0x231/0x4f0
<br>[2022-12-01 15:49:54] ? ttwu_queue_wakelist+0x40/0x1c0
<br>[2022-12-01 15:49:54] down_write+0x47/0x60
<br>[2022-12-01 15:49:54] fuse_file_write_iter+0x1a3/0x430
<br>[2022-12-01 15:49:54] ? apparmor_file_permission+0x70/0x170
<br>[2022-12-01 15:49:54] io_write+0xf6/0x330
<br>[2022-12-01 15:49:54] ? update_cfs_group+0x9c/0xc0
<br>[2022-12-01 15:49:54] ? dequeue_entity+0xd8/0x490
<br>[2022-12-01 15:49:54] io_issue_sqe+0x401/0x1fc0
<br>[2022-12-01 15:49:54] ? lock_timer_base+0x3b/0xd0
<br>[2022-12-01 15:49:54] io_wq_submit_work+0x76/0xd0
<br>[2022-12-01 15:49:54] io_worker_handle_work+0x1a7/0x5f0
<br>[2022-12-01 15:49:54] io_wqe_worker+0x2c0/0x360
<br>[2022-12-01 15:49:54] ? finish_task_switch.isra.0+0x7e/0x2b0
<br>[2022-12-01 15:49:54] ? io_worker_handle_work+0x5f0/0x5f0
<br>[2022-12-01 15:49:54] ? io_worker_handle_work+0x5f0/0x5f0
<br>[2022-12-01 15:49:54] ret_from_fork+0x1f/0x30
<br>[2022-12-01 15:49:54] RIP: 0033:0x0
<br>[2022-12-01 15:49:54] RSP: 002b:0000000000000000 EFLAGS: 00000207
<br>[2022-12-01 15:49:54] RAX: 0000000000000000 RBX: 0000000000000011 RCX: 0000000000000000
<br>[2022-12-01 15:49:54] RDX: 0000000000000001 RSI: 0000000000000001 RDI: 0000000000000120
<br>[2022-12-01 15:49:54] RBP: 0000000000000120 R08: 0000000000000001 R09: 00000000000000f0
<br>[2022-12-01 15:49:54] R10: 00000000000000f8 R11: 00000001239a4128 R12: ffffffffffffdb90
<br>[2022-12-01 15:49:54] R13: 0000000000000001 R14: 0000000000000001 R15: 0000000000000100
<br>[2022-12-01 15:49:54] </TASK><br></span></div><div><br></div><div>My gluster volume log shows plenty of errors like this</div><div><span style="font-family:monospace"><span style="color:rgb(0,0,0)">The message "I [MSGID: 133017] [shard.c:7275:shard_seek] 0-vmdata-shard: seek called on 73f0ad95-f7e3-4a68-8d08-9f7e03182baa. [Operation not supported]" repeated 1564 times between [2022-12-01 00:20:09.578233 +0000] and [2022-12-01 00:22:09.436927 +0000]
</span><br>[2022-12-01 00:22:09.516269 +0000] I [MSGID: 133017] [shard.c:7275:shard_seek] 0-vmdata-shard: seek called on 73f0ad95-f7e3-4a68-8d08-9f7e03182baa. [Operation not supported]<br></span></div><div><br></div><div>and of this</div><div><span style="font-family:monospace"><span style="color:rgb(0,0,0)">[2022-12-01 09:05:08.525867 +0000] I [MSGID: 133017] [shard.c:7275:shard_seek] 0-vmdata-shard: seek called on 3ed993c4-bbb5-4938-86e9-6d22b8541e8e. [Operation not supported]</span><br></span></div><div><br></div><div>Then simply the same </div><div><span style="font-family:monospace"><span style="color:rgb(0,0,0)">pending frames:
</span><br>frame : type(0) op(0)
<br>frame : type(0) op(0)
<br>frame : type(0) op(0)
<br>frame : type(0) op(0)
<br>frame : type(1) op(FSYNC)
<br>frame : type(0) op(0)
<br>frame : type(0) op(0)
<br>frame : type(0) op(0)
<br>frame : type(0) op(0)
<br>frame : type(0) op(0)
<br>frame : type(0) op(0)
<br>frame : type(0) op(0)
<br>frame : type(0) op(0)
<br>frame : type(0) op(0)
<br>frame : type(0) op(0)
<br>frame : type(0) op(0)
<br>frame : type(0) op(0)
<br>frame : type(0) op(0)
<br>frame : type(0) op(0)
<br>patchset: git://<a href="http://git.gluster.org/glusterfs.git" target="_blank">git.gluster.org/glusterfs.git</a>
<br>signal received: 11
<br>time of crash: <br>2022-12-01 14:45:14 +0000
<br>configuration details:
<br>argp 1
<br>backtrace 1
<br>dlfcn 1
<br>libpthread 1
<br>llistxattr 1
<br>setfsid 1
<br>epoll.h 1
<br>xattr.h 1
<br>st_atim.tv_nsec 1
<br>package-string: glusterfs 10.3
<br>/lib/x86_64-linux-gnu/libglusterfs.so.0(+0x28a54)[0x7f1e23db3a54]
<br>/lib/x86_64-linux-gnu/libglusterfs.so.0(gf_print_trace+0x700)[0x7f1e23dbbfc0]
<br>/lib/x86_64-linux-gnu/libc.so.6(+0x38d60)[0x7f1e23b76d60]
<br>/usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/cluster/disperse.so(+0x37a14)[0x7f1e200e9a14]
<br>/usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/cluster/disperse.so(+0x19414)[0x7f1e200cb414]
<br>/usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/cluster/disperse.so(+0xd072)[0x7f1e200bf072]
<br>/usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/performance/readdir-ahead.so(+0x316d)[0x7f1e200a316d]
<br>/usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/cluster/distribute.so(+0x5bdd4)[0x7f1e197aadd4]
<br>/usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/features/shard.so(+0x1e69c)[0x7f1e2008b69c]
<br>/usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/features/shard.so(+0x16551)[0x7f1e20083551]
<br>/usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/features/shard.so(+0x25abf)[0x7f1e20092abf]
<br>/usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/features/shard.so(+0x25d21)[0x7f1e20092d21]
<br>/usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/features/shard.so(+0x167be)[0x7f1e200837be]
<br>/usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/features/shard.so(+0x1c178)[0x7f1e20089178]
<br>/usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/features/utime.so(+0x7804)[0x7f1e20064804]
<br>/usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/performance/write-behind.so(+0x8164)[0x7f1e2004e164]
<br>/usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/performance/write-behind.so(+0x9228)[0x7f1e2004f228]
<br>/usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/performance/write-behind.so(+0x9a4d)[0x7f1e2004fa4d]
<br>/usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/features/utime.so(+0x29e5)[0x7f1e2005f9e5]
<br>/usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/features/shard.so(+0x12e59)[0x7f1e2007fe59]
<br>/usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/features/shard.so(+0xc2c6)[0x7f1e200792c6]
<br>/usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/cluster/distribute.so(+0x69e90)[0x7f1e197b8e90]
<br>/lib/x86_64-linux-gnu/libglusterfs.so.0(default_fxattrop_cbk+0x125)[0x7f1e23e27515]
<br>/usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/cluster/disperse.so(+0x2421c)[0x7f1e200d621c]
<br>/usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/cluster/disperse.so(+0x19414)[0x7f1e200cb414]
<br>/usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/cluster/disperse.so(+0x16373)[0x7f1e200c8373]
<br>/usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/cluster/disperse.so(+0x170f9)[0x7f1e200c90f9]
<br>/usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/cluster/disperse.so(+0x1f929)[0x7f1e200d1929]
<br>/usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/protocol/client.so(+0x469c2)[0x7f1e201859c2]
<br>/lib/x86_64-linux-gnu/libgfrpc.so.0(+0xfccb)[0x7f1e23d5eccb]
<br>/lib/x86_64-linux-gnu/libgfrpc.so.0(rpc_transport_notify+0x26)[0x7f1e23d5a646]
<br>/usr/lib/x86_64-linux-gnu/glusterfs/10.3/rpc-transport/socket.so(+0x64c8)[0x7f1e202784c8]
<br>/usr/lib/x86_64-linux-gnu/glusterfs/10.3/rpc-transport/socket.so(+0xd38c)[0x7f1e2027f38c]
<br>/lib/x86_64-linux-gnu/libglusterfs.so.0(+0x7971d)[0x7f1e23e0471d]
<br>/lib/x86_64-linux-gnu/libpthread.so.0(+0x7ea7)[0x7f1e23d1aea7]
<br>/lib/x86_64-linux-gnu/libc.so.6(clone+0x3f)[0x7f1e23c3aa2f]<br>
<br></span></div><div>I'm still unable to gather any core dump.</div><div><br></div><div>I can barely read anything intelligible from all of this, but something is clearly going on with sharding here. So I'm going to empty the volume, destroy it completely, re-create it without sharding, and see what happens.</div><div><br><div><div dir="ltr"><div dir="ltr"><div style="color:rgb(34,34,34)"><font size="4" face="arial, sans-serif"><b>Angel Docampo</b></font></div><div style="color:rgb(34,34,34)"><a href="https://www.google.com/maps/place/Edificio+de+Oficinas+Euro+3/@41.3755943,2.0730134,17z/data=!3m2!4b1!5s0x12a4997021aad323:0x3e06bf8ae6d68351!4m5!3m4!1s0x12a4997a67bf592f:0x83c2323a9cc2aa4b!8m2!3d41.3755903!4d2.0752021" target="_blank"><img src="https://ci3.googleusercontent.com/mail-sig/AIorK4yfwAc1Ml7oXFmQS6cJWaMeVnZ7xmAkBZPyODZAB9R8us12sFWd19cHxqDJ7CRF-UcvfKFLJNg"></a> <a href="mailto:angel.docampo@eoniantec.com" target="_blank"><img src="https://ci3.googleusercontent.com/mail-sig/AIorK4xhLmETvCmyOlze-bvuD8EJDZ0KgPmtCKnW0ObWzrqFda6zykLG06WgSatNHY2tgyMj_FOg3RY"></a> <a href="tel:+34-93-1592929" target="_blank"><img src="https://ci3.googleusercontent.com/mail-sig/AIorK4wKRh91a3Q-nUQnp1zQ-4rrdeN4FKksw-kDAAzCOg9hOTqSiqNmU2AloNPHrS-QwtOWiFHYHl0"></a></div></div></div></div><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Fri, Nov 25, 2022 at 7:08 PM Angel Docampo (<<a href="mailto:angel.docampo@eoniantec.com" target="_blank">angel.docampo@eoniantec.com</a>>) wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">I also noticed that loop0... 
AFAIK, I wasn't using any loop device, at least not consciously.<div>After looking for the same messages on the other gluster/proxmox nodes, I saw no trace of it.</div><div>Then I saw that on that node there is a single LXC container, whose disk lives on the glusterfs and, indeed, uses ext4.</div><div>After today's crash I was unable to boot it up again and the logs went silent; I just tried to boot it up, and this immediately appeared in dmesg</div><div><span style="font-family:monospace"><span style="color:rgb(0,0,0)">[2022-11-25 18:04:18] loop0: detected capacity change from 0 to 16777216
</span><br>[2022-11-25 18:04:18] EXT4-fs (loop0): error loading journal
<br>[2022-11-25 18:05:26] loop0: detected capacity change from 0 to 16777216
<br>[2022-11-25 18:05:26] EXT4-fs (loop0): INFO: recovery required on readonly filesystem
<br>[2022-11-25 18:05:26] EXT4-fs (loop0): write access unavailable, cannot proceed (try mounting with noload)<br>
<br></span></div><div>And the LXC container didn't boot up. I manually moved the LXC container to the underlying ZFS where gluster lives; it booted up, and the dmesg log shows</div><div><span style="font-family:monospace"><span style="color:rgb(0,0,0)">[2022-11-25 18:24:06] loop0: detected capacity change from 0 to 16777216
</span><br>[2022-11-25 18:24:06] EXT4-fs warning (device loop0): ext4_multi_mount_protect:326: MMP interval 42 higher than expected, please wait.
<br>[2022-11-25 18:24:50] EXT4-fs (loop0): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.<br>
<br></span></div>So, to recapitulate:<br><div>- the loop device on the host comes from the LXC container; that's not surprising, but I didn't know it. <br></div><div>- the LXC container had a lot of I/O issues just before the two crashes, today's and the one 4 days ago, this Monday</div><div>- as a side note, this gluster has been in production since last Thursday, so the first crash came exactly 4 days after this LXC was started with its storage on the gluster, and exactly 4 days later it crashed again.</div><div>- these crashes began to happen after the upgrade to gluster 10.3; it was working just fine with former versions of gluster (from 3.X to 9.X) and from proxmox 5.X to proxmox 7.1, when all the issues began; now I'm on proxmox 7.2.</div><div>- the underlying ZFS where gluster sits has no ZIL or SLOG (it had them before the upgrade to gluster 10.3, but as I had to re-create the gluster, I decided not to add them, because all my disks are SSD and there is no need for them). I've added them back to test whether the LXC container caused the same issues; it did, so they don't seem to make any difference.</div><div>- there are more loop0 I/O errors in dmesg besides the days of the crashes, but just "one" error per day, and not every day; on the days the gluster mountpoint became inaccessible, there are tens of errors per millisecond just before the crash</div><div><br></div><div>I'm going to get rid of that LXC; as I'm now migrating from VMs to K8s (living in a VM cluster inside proxmox), I was ready to convert this one as well, and now it's a must.</div><div><br></div><div>I don't know if anyone at gluster can replicate this scenario (proxmox + gluster distributed disperse + LXC on a gluster directory), to see if it is reproducible. 
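For reference, the shard "seek called on ... [Operation not supported]" messages from earlier in the thread can also be checked directly from the client side. This is only a quick sketch (it assumes Python 3 on a Linux node, and any path you pass it is hypothetical): it asks the kernel for the first data offset of a file with lseek(SEEK_DATA) and reports the errno name if the filesystem stack rejects the operation.

```python
import errno
import os

def probe_seek_data(path):
    """lseek(fd, 0, SEEK_DATA) on 'path': return the data offset on success,
    or the errno name (e.g. 'EOPNOTSUPP') if the filesystem refuses seek."""
    fd = os.open(path, os.O_RDONLY)
    try:
        try:
            # Ask for the offset of the first byte of data at or after 0.
            return os.lseek(fd, 0, os.SEEK_DATA)
        except OSError as e:
            # Unsupported seek surfaces here (EOPNOTSUPP/EINVAL, FS-dependent).
            return errno.errorcode.get(e.errno, str(e.errno))
    finally:
        os.close(fd)

if __name__ == "__main__":
    import sys
    for target in sys.argv[1:]:
        print(target, "->", probe_seek_data(target))
```

On a local filesystem this should report 0 for a file that starts with data; run against a file on the Gluster FUSE mountpoint, it shows whether the client really gets "operation not supported" back, matching the shard_seek log entries.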
I know this must be a corner case; I'm just wondering why it stopped working, and whether it is a bug in GlusterFS 10.3, in LXC, or in Proxmox 7.1 upwards (where I'm going to post this now, though Proxmox probably won't be interested, as they explicitly suggest mounting glusterfs with the gluster client rather than mapping a directory where gluster is mounted via fstab)</div><div><br></div><div>Thank you a lot, Xavi. I will monitor dmesg to make sure all those loop errors disappear, and hopefully I won't have a crash next Tuesday. :)</div><div><div><div dir="ltr"><div dir="ltr"><div style="color:rgb(34,34,34)"><font face="tahoma, sans-serif" size="4"><b><br></b></font></div><div style="color:rgb(34,34,34)"><font size="4" face="arial, sans-serif"><b>Angel Docampo</b></font></div><div style="color:rgb(34,34,34)"><a href="https://www.google.com/maps/place/Edificio+de+Oficinas+Euro+3/@41.3755943,2.0730134,17z/data=!3m2!4b1!5s0x12a4997021aad323:0x3e06bf8ae6d68351!4m5!3m4!1s0x12a4997a67bf592f:0x83c2323a9cc2aa4b!8m2!3d41.3755903!4d2.0752021" target="_blank"><img src="https://ci3.googleusercontent.com/mail-sig/AIorK4yfwAc1Ml7oXFmQS6cJWaMeVnZ7xmAkBZPyODZAB9R8us12sFWd19cHxqDJ7CRF-UcvfKFLJNg"></a> <a href="mailto:angel.docampo@eoniantec.com" target="_blank"><img src="https://ci3.googleusercontent.com/mail-sig/AIorK4xhLmETvCmyOlze-bvuD8EJDZ0KgPmtCKnW0ObWzrqFda6zykLG06WgSatNHY2tgyMj_FOg3RY"></a> <a href="tel:+34-93-1592929" target="_blank"><img src="https://ci3.googleusercontent.com/mail-sig/AIorK4wKRh91a3Q-nUQnp1zQ-4rrdeN4FKksw-kDAAzCOg9hOTqSiqNmU2AloNPHrS-QwtOWiFHYHl0"></a></div></div></div></div><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Fri, Nov 25, 2022 at 1:25 PM Xavi Hernandez (<<a href="mailto:jahernan@redhat.com" target="_blank">jahernan@redhat.com</a>>) wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">What is "loop0"? It seems it's having some 
issue. Does it point to a Gluster file?<div><br></div><div>I also see that there's an io_uring thread in D state. If that one belongs to Gluster, it may explain why systemd was unable to generate a core dump (all threads need to be stopped to generate a core dump, but a thread blocked inside the kernel cannot be stopped).</div><div><br></div><div>If you are using io_uring in Gluster, maybe you can disable it to see if it's related.</div><div><br></div><div>Xavi</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Fri, Nov 25, 2022 at 11:39 AM Angel Docampo <<a href="mailto:angel.docampo@eoniantec.com" target="_blank">angel.docampo@eoniantec.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">Well, it just happened again, the same server, the same mountpoint.<div><br></div><div>I'm unable to get the core dumps; coredumpctl says there are no core dumps. It would be funny if I weren't the one suffering it, but the systemd-coredump service crashed as well</div><div><span style="font-family:monospace"><span style="font-weight:bold;color:rgb(255,84,84)">●</span><span style="color:rgb(0,0,0)"> systemd-coredump@0-3199871-0.service - Process Core Dump (PID 3199871/UID 0)
</span><br> Loaded: loaded (/lib/systemd/system/systemd-coredump@.service; static)
<br> Active: <span style="font-weight:bold;color:rgb(255,84,84)">failed</span><span style="color:rgb(0,0,0)"> (Result: timeout) since Fri 2022-11-25 10:54:59 CET; 39min ago
</span><br>TriggeredBy: <span style="font-weight:bold;color:rgb(84,255,84)">●</span><span style="color:rgb(0,0,0)"> systemd-coredump.socket
</span><br> Docs: man:systemd-coredump(8)
<br> Process: 3199873 ExecStart=/lib/systemd/systemd-coredump (code=killed, signal=TERM)
<br> Main PID: 3199873 (code=killed, signal=TERM)
<br> CPU: 15ms
<br>
<br>Nov 25 10:49:59 pve02 systemd[1]: Started Process Core Dump (PID 3199871/UID 0).
<br>Nov 25 10:54:59 pve02 systemd[1]: <span style="font-weight:bold;color:rgb(215,215,95)">systemd-coredump@0-3199871-0.service: Service reached runtime time limit. Stopping.</span><span style="color:rgb(0,0,0)">
</span><br>Nov 25 10:54:59 pve02 systemd[1]: <span style="font-weight:bold;color:rgb(215,215,95)">systemd-coredump@0-3199871-0.service: Failed with result 'timeout'.</span><br><span style="color:rgb(0,0,0)">
</span><br></span></div><div><br></div><div>I just saw the exception in dmesg:</div><div><span style="font-family:monospace;color:rgb(0,0,0)">[2022-11-25 10:50:08] INFO: task kmmpd-loop0:681644 blocked for more than 120 seconds.
</span><br><font face="monospace">[2022-11-25 10:50:08] Tainted: P IO 5.15.60-2-pve #1
</font><br><font face="monospace">[2022-11-25 10:50:08] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
</font><br><font face="monospace">[2022-11-25 10:50:08] task:kmmpd-loop0 state:D stack: 0 pid:681644 ppid: 2 flags:0x00004000
</font><br><font face="monospace">[2022-11-25 10:50:08] Call Trace:
</font><br><font face="monospace">[2022-11-25 10:50:08] <TASK>
</font><br><font face="monospace">[2022-11-25 10:50:08] __schedule+0x33d/0x1750
</font><br><font face="monospace">[2022-11-25 10:50:08] ? bit_wait+0x70/0x70
</font><br><font face="monospace">[2022-11-25 10:50:08] schedule+0x4e/0xc0
</font><br><font face="monospace">[2022-11-25 10:50:08] io_schedule+0x46/0x80
</font><br><font face="monospace">[2022-11-25 10:50:08] bit_wait_io+0x11/0x70
</font><br><font face="monospace">[2022-11-25 10:50:08] __wait_on_bit+0x31/0xa0
</font><br><font face="monospace">[2022-11-25 10:50:08] out_of_line_wait_on_bit+0x8d/0xb0
</font><br><font face="monospace">[2022-11-25 10:50:08] ? var_wake_function+0x30/0x30
</font><br><font face="monospace">[2022-11-25 10:50:08] __wait_on_buffer+0x34/0x40
</font><br><font face="monospace">[2022-11-25 10:50:08] write_mmp_block+0x127/0x180
</font><br><font face="monospace">[2022-11-25 10:50:08] kmmpd+0x1b9/0x430
</font><br><font face="monospace">[2022-11-25 10:50:08] ? write_mmp_block+0x180/0x180
</font><br><font face="monospace">[2022-11-25 10:50:08] kthread+0x127/0x150
</font><br><font face="monospace">[2022-11-25 10:50:08] ? set_kthread_struct+0x50/0x50
</font><br><font face="monospace">[2022-11-25 10:50:08] ret_from_fork+0x1f/0x30
</font><br><font face="monospace">[2022-11-25 10:50:08] </TASK>
</font><br><font face="monospace">[2022-11-25 10:50:08] INFO: task </font>iou-wrk-1511979<font face="monospace">:3200401 blocked for more than 120 seconds.
</font><br><font face="monospace">[2022-11-25 10:50:08] Tainted: P IO 5.15.60-2-pve #1
</font><br><font face="monospace">[2022-11-25 10:50:08] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
</font><br><font face="monospace">[2022-11-25 10:50:08] task:</font>iou-wrk-1511979<font face="monospace"> state:D stack: 0 pid:3200401 ppid: 1 flags:0x00004000
</font><br><font face="monospace">[2022-11-25 10:50:08] Call Trace:
</font><br><font face="monospace">[2022-11-25 10:50:08] <TASK>
</font><br><font face="monospace">[2022-11-25 10:50:08] __schedule+0x33d/0x1750
</font><br><font face="monospace">[2022-11-25 10:50:08] schedule+0x4e/0xc0
</font><br><font face="monospace">[2022-11-25 10:50:08] rwsem_down_write_slowpath+0x231/0x4f0
</font><br><font face="monospace">[2022-11-25 10:50:08] down_write+0x47/0x60
</font><br><font face="monospace">[2022-11-25 10:50:08] fuse_file_write_iter+0x1a3/0x430
</font><br><font face="monospace">[2022-11-25 10:50:08] ? apparmor_file_permission+0x70/0x170
</font><br><font face="monospace">[2022-11-25 10:50:08] io_write+0xfb/0x320
</font><br><font face="monospace">[2022-11-25 10:50:08] ? put_dec+0x1c/0xa0
</font><br><font face="monospace">[2022-11-25 10:50:08] io_issue_sqe+0x401/0x1fc0
</font><br><font face="monospace">[2022-11-25 10:50:08] io_wq_submit_work+0x76/0xd0
</font><br><font face="monospace">[2022-11-25 10:50:08] io_worker_handle_work+0x1a7/0x5f0
</font><br><font face="monospace">[2022-11-25 10:50:08] io_wqe_worker+0x2c0/0x360
</font><br><font face="monospace">[2022-11-25 10:50:08] ? finish_task_switch.isra.0+0x7e/0x2b0
</font><br><font face="monospace">[2022-11-25 10:50:08] ? io_worker_handle_work+0x5f0/0x5f0
</font><br><font face="monospace">[2022-11-25 10:50:08] ? io_worker_handle_work+0x5f0/0x5f0
</font><br><font face="monospace">[2022-11-25 10:50:08] ret_from_fork+0x1f/0x30
</font><br><font face="monospace">[2022-11-25 10:50:08] RIP: 0033:0x0
</font><br><font face="monospace">[2022-11-25 10:50:08] RSP: 002b:0000000000000000 EFLAGS: 00000216 ORIG_RAX: 00000000000001aa
</font><br><font face="monospace">[2022-11-25 10:50:08] RAX: 0000000000000000 RBX: 00007fdb1efef640 RCX: 00007fdd59f872e9
</font><br><font face="monospace">[2022-11-25 10:50:08] RDX: 0000000000000000 RSI: 0000000000000001 RDI: 0000000000000011
</font><br><font face="monospace">[2022-11-25 10:50:08] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000008
</font><br><font face="monospace">[2022-11-25 10:50:08] R10: 0000000000000000 R11: 0000000000000216 R12: 000055662e5bd268
</font><br><font face="monospace">[2022-11-25 10:50:08] R13: 000055662e5bd320 R14: 000055662e5bd260 R15: 0000000000000000
</font><br><font face="monospace">[2022-11-25 10:50:08] </TASK>
</font><br><font face="monospace">[2022-11-25 10:52:08] INFO: task kmmpd-loop0:681644 blocked for more than 241 seconds.
</font><br><font face="monospace">[2022-11-25 10:52:08] Tainted: P IO 5.15.60-2-pve #1
</font><br><font face="monospace">[2022-11-25 10:52:08] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
</font><br><font face="monospace">[2022-11-25 10:52:08] task:kmmpd-loop0 state:D stack: 0 pid:681644 ppid: 2 flags:0x00004000
</font><br><font face="monospace">[2022-11-25 10:52:08] Call Trace:
</font><br><font face="monospace">[2022-11-25 10:52:08] <TASK>
</font><br><font face="monospace">[2022-11-25 10:52:08] __schedule+0x33d/0x1750
</font><br><font face="monospace">[2022-11-25 10:52:08] ? bit_wait+0x70/0x70
</font><br><font face="monospace">[2022-11-25 10:52:08] schedule+0x4e/0xc0
</font><br><font face="monospace">[2022-11-25 10:52:08] io_schedule+0x46/0x80
</font><br><font face="monospace">[2022-11-25 10:52:08] bit_wait_io+0x11/0x70
</font><br><font face="monospace">[2022-11-25 10:52:08] __wait_on_bit+0x31/0xa0
</font><br><font face="monospace">[2022-11-25 10:52:08] out_of_line_wait_on_bit+0x8d/0xb0
</font><br><font face="monospace">[2022-11-25 10:52:08] ? var_wake_function+0x30/0x30
</font><br><font face="monospace">[2022-11-25 10:52:08] __wait_on_buffer+0x34/0x40
</font><br><font face="monospace">[2022-11-25 10:52:08] write_mmp_block+0x127/0x180
</font><br><font face="monospace">[2022-11-25 10:52:08] kmmpd+0x1b9/0x430
</font><br><font face="monospace">[2022-11-25 10:52:08] ? write_mmp_block+0x180/0x180
</font><br><font face="monospace">[2022-11-25 10:52:08] kthread+0x127/0x150
</font><br><font face="monospace">[2022-11-25 10:52:08] ? set_kthread_struct+0x50/0x50
</font><br><font face="monospace">[2022-11-25 10:52:08] ret_from_fork+0x1f/0x30
</font><br><font face="monospace">[2022-11-25 10:52:08] </TASK>
</font><br><font face="monospace">[2022-11-25 10:52:08] INFO: task </font>iou-wrk-1511979<font face="monospace">:3200401 blocked for more than 241 seconds.
</font><br><font face="monospace">[2022-11-25 10:52:08] Tainted: P IO 5.15.60-2-pve #1
</font><br><font face="monospace">[2022-11-25 10:52:08] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
</font><br><font face="monospace">[2022-11-25 10:52:08] task:</font>iou-wrk-1511979<font face="monospace"> state:D stack: 0 pid:3200401 ppid: 1 flags:0x00004000
</font><br><font face="monospace">[2022-11-25 10:52:08] Call Trace:
</font><br><font face="monospace">[2022-11-25 10:52:08] <TASK>
</font><br><font face="monospace">[2022-11-25 10:52:08] __schedule+0x33d/0x1750
</font><br><font face="monospace">[2022-11-25 10:52:08] schedule+0x4e/0xc0
</font><br><font face="monospace">[2022-11-25 10:52:08] rwsem_down_write_slowpath+0x231/0x4f0
</font><br><font face="monospace">[2022-11-25 10:52:08] down_write+0x47/0x60
</font><br><font face="monospace">[2022-11-25 10:52:08] fuse_file_write_iter+0x1a3/0x430
</font><br><font face="monospace">[2022-11-25 10:52:08] ? apparmor_file_permission+0x70/0x170
</font><br><font face="monospace">[2022-11-25 10:52:08] io_write+0xfb/0x320
</font><br><font face="monospace">[2022-11-25 10:52:08] ? put_dec+0x1c/0xa0
</font><br><font face="monospace">[2022-11-25 10:52:08] io_issue_sqe+0x401/0x1fc0
</font><br><font face="monospace">[2022-11-25 10:52:08] io_wq_submit_work+0x76/0xd0
</font><br><font face="monospace">[2022-11-25 10:52:08] io_worker_handle_work+0x1a7/0x5f0
</font><br><font face="monospace">[2022-11-25 10:52:08] io_wqe_worker+0x2c0/0x360
</font><br><font face="monospace">[2022-11-25 10:52:08] ? finish_task_switch.isra.0+0x7e/0x2b0
</font><br><font face="monospace">[2022-11-25 10:52:08] ? io_worker_handle_work+0x5f0/0x5f0
</font><br><font face="monospace">[2022-11-25 10:52:08] ? io_worker_handle_work+0x5f0/0x5f0
</font><br><font face="monospace">[2022-11-25 10:52:08] ret_from_fork+0x1f/0x30
</font><br><font face="monospace">[2022-11-25 10:52:08] RIP: 0033:0x0
</font><br><font face="monospace">[2022-11-25 10:52:08] RSP: 002b:0000000000000000 EFLAGS: 00000216 ORIG_RAX: 00000000000001aa
</font><br><font face="monospace">[2022-11-25 10:52:08] RAX: 0000000000000000 RBX: 00007fdb1efef640 RCX: 00007fdd59f872e9
</font><br><font face="monospace">[2022-11-25 10:52:08] RDX: 0000000000000000 RSI: 0000000000000001 RDI: 0000000000000011
</font><br><font face="monospace">[2022-11-25 10:52:08] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000008
</font><br><font face="monospace">[2022-11-25 10:52:08] R10: 0000000000000000 R11: 0000000000000216 R12: 000055662e5bd268
</font><br><font face="monospace">[2022-11-25 10:52:08] R13: 000055662e5bd320 R14: 000055662e5bd260 R15: 0000000000000000
</font><br><font face="monospace">[2022-11-25 10:52:08] </TASK>
</font><br><font face="monospace">[2022-11-25 10:52:12] loop: Write error at byte offset 37908480, length 4096.
</font><br><font face="monospace">[2022-11-25 10:52:12] print_req_error: 7 callbacks suppressed
</font><br><font face="monospace">[2022-11-25 10:52:12] blk_update_request: I/O error, dev loop0, sector 74040 op 0x1:(WRITE) flags 0x3800 phys_seg 1 prio class 0
</font><br><font face="monospace">[2022-11-25 10:52:12] Buffer I/O error on dev loop0, logical block 9255, lost sync page write
</font><br><font face="monospace">[2022-11-25 10:52:12] EXT4-fs error (device loop0): kmmpd:179: comm kmmpd-loop0: Error writing to MMP block
</font><br><font face="monospace">[2022-11-25 10:52:12] loop: Write error at byte offset 37908480, length 4096.
</font><br><font face="monospace">[2022-11-25 10:52:12] blk_update_request: I/O error, dev loop0, sector 74040 op 0x1:(WRITE) flags 0x3800 phys_seg 1 prio class 0
</font><br><font face="monospace">[2022-11-25 10:52:12] Buffer I/O error on dev loop0, logical block 9255, lost sync page write
</font><br><font face="monospace">[2022-11-25 10:52:18] loop: Write error at byte offset 37908480, length 4096.
</font><br><font face="monospace">[2022-11-25 10:52:18] blk_update_request: I/O error, dev loop0, sector 74040 op 0x1:(WRITE) flags 0x3800 phys_seg 1 prio class 0
</font><br><font face="monospace">[2022-11-25 10:52:18] Buffer I/O error on dev loop0, logical block 9255, lost sync page write
</font><br><font face="monospace">[2022-11-25 10:52:18] loop: Write error at byte offset 4490452992, length 4096.
</font><br><font face="monospace">[2022-11-25 10:52:18] loop: Write error at byte offset 4490457088, length 4096.
</font><br><font face="monospace">[2022-11-25 10:52:18] blk_update_request: I/O error, dev loop0, sector 8770416 op 0x1:(WRITE) flags 0x800 phys_seg 1 prio class 0
</font><br><font face="monospace">[2022-11-25 10:52:18] blk_update_request: I/O error, dev loop0, sector 8770424 op 0x1:(WRITE) flags 0x800 phys_seg 1 prio class 0
</font><br><font face="monospace">[2022-11-25 10:52:18] Aborting journal on device loop0-8.
</font><br><font face="monospace">[2022-11-25 10:52:18] loop: Write error at byte offset 4429185024, length 4096.
</font><br><font face="monospace">[2022-11-25 10:52:18] blk_update_request: I/O error, dev loop0, sector 8650752 op 0x1:(WRITE) flags 0x800 phys_seg 1 prio class 0
</font><br><font face="monospace">[2022-11-25 10:52:18] blk_update_request: I/O error, dev loop0, sector 8650752 op 0x1:(WRITE) flags 0x800 phys_seg 1 prio class 0
</font><br><font face="monospace">[2022-11-25 10:52:18] Buffer I/O error on dev loop0, logical block 1081344, lost sync page write
</font><br><font face="monospace">[2022-11-25 10:52:18] JBD2: Error -5 detected when updating journal superblock for loop0-8.
</font><br><font face="monospace">[2022-11-25 10:52:23] loop: Write error at byte offset 37908480, length 4096.
</font><br><font face="monospace">[2022-11-25 10:52:23] blk_update_request: I/O error, dev loop0, sector 74040 op 0x1:(WRITE) flags 0x3800 phys_seg 1 prio class 0
</font><br><font face="monospace">[2022-11-25 10:52:23] Buffer I/O error on dev loop0, logical block 9255, lost sync page write
</font><br><font face="monospace">[2022-11-25 10:52:28] loop: Write error at byte offset 37908480, length 4096.
</font><br><font face="monospace">[2022-11-25 10:52:28] blk_update_request: I/O error, dev loop0, sector 74040 op 0x1:(WRITE) flags 0x3800 phys_seg 1 prio class 0
</font><br><font face="monospace">[2022-11-25 10:52:28] Buffer I/O error on dev loop0, logical block 9255, lost sync page write
</font><br><font face="monospace">[2022-11-25 10:52:33] loop: Write error at byte offset 37908480, length 4096.
</font><br><font face="monospace">[2022-11-25 10:52:33] blk_update_request: I/O error, dev loop0, sector 74040 op 0x1:(WRITE) flags 0x3800 phys_seg 1 prio class 0
</font><br><font face="monospace">[2022-11-25 10:52:33] Buffer I/O error on dev loop0, logical block 9255, lost sync page write
</font><br><font face="monospace">[2022-11-25 10:52:38] loop: Write error at byte offset 37908480, length 4096.
</font><br><font face="monospace">[2022-11-25 10:52:38] blk_update_request: I/O error, dev loop0, sector 74040 op 0x1:(WRITE) flags 0x3800 phys_seg 1 prio class 0
</font><br><font face="monospace">[2022-11-25 10:52:38] Buffer I/O error on dev loop0, logical block 9255, lost sync page write
</font><br><font face="monospace">[2022-11-25 10:52:43] loop: Write error at byte offset 37908480, length 4096.
</font><br><font face="monospace">[2022-11-25 10:52:43] blk_update_request: I/O error, dev loop0, sector 74040 op 0x1:(WRITE) flags 0x3800 phys_seg 1 prio class 0
</font><br><font face="monospace">[2022-11-25 10:52:43] Buffer I/O error on dev loop0, logical block 9255, lost sync page write
</font><br><font face="monospace">[2022-11-25 10:52:48] loop: Write error at byte offset 37908480, length 4096.
</font><br><font face="monospace">[2022-11-25 10:52:48] blk_update_request: I/O error, dev loop0, sector 74040 op 0x1:(WRITE) flags 0x3800 phys_seg 1 prio class 0
</font><br><font face="monospace">[2022-11-25 10:52:48] Buffer I/O error on dev loop0, logical block 9255, lost sync page write
</font><br><font face="monospace">[2022-11-25 10:52:53] loop: Write error at byte offset 37908480, length 4096.
</font><br><font face="monospace">[2022-11-25 10:52:53] blk_update_request: I/O error, dev loop0, sector 74040 op 0x1:(WRITE) flags 0x3800 phys_seg 1 prio class 0
</font><br><font face="monospace">[2022-11-25 10:52:53] Buffer I/O error on dev loop0, logical block 9255, lost sync page write
</font><br><font face="monospace">[2022-11-25 10:52:59] loop: Write error at byte offset 37908480, length 4096.
</font><br><font face="monospace">[2022-11-25 10:52:59] blk_update_request: I/O error, dev loop0, sector 74040 op 0x1:(WRITE) flags 0x3800 phys_seg 1 prio class 0
</font><br><font face="monospace">[2022-11-25 10:52:59] Buffer I/O error on dev loop0, logical block 9255, lost sync page write
</font><br><font face="monospace">[2022-11-25 10:53:04] loop: Write error at byte offset 37908480, length 4096.
</font><br><font face="monospace">[2022-11-25 10:53:04] blk_update_request: I/O error, dev loop0, sector 74040 op 0x1:(WRITE) flags 0x3800 phys_seg 1 prio class 0
</font><br><font face="monospace">[2022-11-25 10:53:04] Buffer I/O error on dev loop0, logical block 9255, lost sync page write
</font><br><font face="monospace">[2022-11-25 10:53:09] loop: Write error at byte offset 37908480, length 4096.
</font><br><font face="monospace">[2022-11-25 10:53:09] blk_update_request: I/O error, dev loop0, sector 74040 op 0x1:(WRITE) flags 0x3800 phys_seg 1 prio class 0
</font><br><font face="monospace">[2022-11-25 10:53:09] Buffer I/O error on dev loop0, logical block 9255, lost sync page write
</font><br><font face="monospace">[2022-11-25 10:53:14] loop: Write error at byte offset 37908480, length 4096.
</font><br><font face="monospace">[2022-11-25 10:53:14] blk_update_request: I/O error, dev loop0, sector 74040 op 0x1:(WRITE) flags 0x3800 phys_seg 1 prio class 0
</font><br><font face="monospace">[2022-11-25 10:53:14] Buffer I/O error on dev loop0, logical block 9255, lost sync page write
</font><br><font face="monospace">[2022-11-25 10:53:19] loop: Write error at byte offset 37908480, length 4096.
</font><br><font face="monospace">[2022-11-25 10:53:19] blk_update_request: I/O error, dev loop0, sector 74040 op 0x1:(WRITE) flags 0x3800 phys_seg 1 prio class 0
</font><br><font face="monospace">[2022-11-25 10:53:19] Buffer I/O error on dev loop0, logical block 9255, lost sync page write
</font><br><font face="monospace">[2022-11-25 10:53:24] loop: Write error at byte offset 37908480, length 4096.
</font><br><font face="monospace">[2022-11-25 10:53:24] blk_update_request: I/O error, dev loop0, sector 74040 op 0x1:(WRITE) flags 0x3800 phys_seg 1 prio class 0
</font><br><font face="monospace">[2022-11-25 10:53:24] Buffer I/O error on dev loop0, logical block 9255, lost sync page write
</font><br><font face="monospace">[2022-11-25 10:53:29] loop: Write error at byte offset 37908480, length 4096.
</font><br><font face="monospace">[2022-11-25 10:53:29] blk_update_request: I/O error, dev loop0, sector 74040 op 0x1:(WRITE) flags 0x3800 phys_seg 1 prio class 0
</font><br><font face="monospace">[2022-11-25 10:53:29] Buffer I/O error on dev loop0, logical block 9255, lost sync page write
</font><br><font face="monospace">[2022-11-25 10:53:34] loop: Write error at byte offset 37908480, length 4096.
</font><br><font face="monospace">[2022-11-25 10:53:34] blk_update_request: I/O error, dev loop0, sector 74040 op 0x1:(WRITE) flags 0x3800 phys_seg 1 prio class 0
</font><br><font face="monospace">[2022-11-25 10:53:34] Buffer I/O error on dev loop0, logical block 9255, lost sync page write
</font><br><font face="monospace">[2022-11-25 10:53:40] loop: Write error at byte offset 37908480, length 4096.
</font><br><font face="monospace">[2022-11-25 10:53:40] blk_update_request: I/O error, dev loop0, sector 74040 op 0x1:(WRITE) flags 0x3800 phys_seg 1 prio class 0
</font><br><font face="monospace">[2022-11-25 10:53:40] Buffer I/O error on dev loop0, logical block 9255, lost sync page write
</font><br><font face="monospace">[2022-11-25 10:53:45] loop: Write error at byte offset 37908480, length 4096.
</font><br><font face="monospace">[2022-11-25 10:53:45] blk_update_request: I/O error, dev loop0, sector 74040 op 0x1:(WRITE) flags 0x3800 phys_seg 1 prio class 0
</font><br><font face="monospace">[2022-11-25 10:53:45] Buffer I/O error on dev loop0, logical block 9255, lost sync page write
</font><br><font face="monospace">[2022-11-25 10:53:50] loop: Write error at byte offset 37908480, length 4096.
</font><br><font face="monospace">[2022-11-25 10:53:50] blk_update_request: I/O error, dev loop0, sector 74040 op 0x1:(WRITE) flags 0x3800 phys_seg 1 prio class 0
</font><br><font face="monospace">[2022-11-25 10:53:50] Buffer I/O error on dev loop0, logical block 9255, lost sync page write
</font><br><font face="monospace">[2022-11-25 10:53:55] loop: Write error at byte offset 37908480, length 4096.
</font><br><font face="monospace">[2022-11-25 10:53:55] blk_update_request: I/O error, dev loop0, sector 74040 op 0x1:(WRITE) flags 0x3800 phys_seg 1 prio class 0
</font><br><font face="monospace">[2022-11-25 10:53:55] Buffer I/O error on dev loop0, logical block 9255, lost sync page write
</font><br><font face="monospace">[2022-11-25 10:54:00] loop: Write error at byte offset 37908480, length 4096.
</font><br><font face="monospace">[2022-11-25 10:54:00] blk_update_request: I/O error, dev loop0, sector 74040 op 0x1:(WRITE) flags 0x3800 phys_seg 1 prio class 0
</font><br><font face="monospace">[2022-11-25 10:54:00] Buffer I/O error on dev loop0, logical block 9255, lost sync page write
</font><br><font face="monospace">[2022-11-25 10:54:05] loop: Write error at byte offset 37908480, length 4096.
</font><br><font face="monospace">[2022-11-25 10:54:05] blk_update_request: I/O error, dev loop0, sector 74040 op 0x1:(WRITE) flags 0x3800 phys_seg 1 prio class 0
</font><br><font face="monospace">[2022-11-25 10:54:05] Buffer I/O error on dev loop0, logical block 9255, lost sync page write
</font><br><font face="monospace">[2022-11-25 10:54:10] loop: Write error at byte offset 37908480, length 4096.
</font><br><font face="monospace">[2022-11-25 10:54:10] blk_update_request: I/O error, dev loop0, sector 74040 op 0x1:(WRITE) flags 0x3800 phys_seg 1 prio class 0
</font><br><font face="monospace">[2022-11-25 10:54:10] Buffer I/O error on dev loop0, logical block 9255, lost sync page write
</font><br><font face="monospace">[2022-11-25 10:54:15] loop: Write error at byte offset 37908480, length 4096.
</font><br><font face="monospace">[2022-11-25 10:54:15] blk_update_request: I/O error, dev loop0, sector 74040 op 0x1:(WRITE) flags 0x3800 phys_seg 1 prio class 0
</font><br><font face="monospace">[2022-11-25 10:54:15] Buffer I/O error on dev loop0, logical block 9255, lost sync page write
</font><br><font face="monospace">[2022-11-25 10:54:21] loop: Write error at byte offset 37908480, length 4096.
</font><br><font face="monospace">[2022-11-25 10:54:21] blk_update_request: I/O error, dev loop0, sector 74040 op 0x1:(WRITE) flags 0x3800 phys_seg 1 prio class 0
</font><br><font face="monospace">[2022-11-25 10:54:21] Buffer I/O error on dev loop0, logical block 9255, lost sync page write
</font><br><font face="monospace">[2022-11-25 10:54:26] loop: Write error at byte offset 37908480, length 4096.
</font><br><font face="monospace">[2022-11-25 10:54:26] blk_update_request: I/O error, dev loop0, sector 74040 op 0x1:(WRITE) flags 0x3800 phys_seg 1 prio class 0
</font><br><font face="monospace">[2022-11-25 10:54:26] Buffer I/O error on dev loop0, logical block 9255, lost sync page write
</font><br><font face="monospace">[2022-11-25 10:54:31] loop: Write error at byte offset 37908480, length 4096.
</font><br><font face="monospace">[2022-11-25 10:54:31] blk_update_request: I/O error, dev loop0, sector 74040 op 0x1:(WRITE) flags 0x3800 phys_seg 1 prio class 0
</font><br><font face="monospace">[2022-11-25 10:54:31] Buffer I/O error on dev loop0, logical block 9255, lost sync page write
</font><br><font face="monospace">[2022-11-25 10:54:36] loop: Write error at byte offset 37908480, length 4096.
</font><br><font face="monospace">[2022-11-25 10:54:36] blk_update_request: I/O error, dev loop0, sector 74040 op 0x1:(WRITE) flags 0x3800 phys_seg 1 prio class 0
</font><br><font face="monospace">[2022-11-25 10:54:36] Buffer I/O error on dev loop0, logical block 9255, lost sync page write
</font><br><font face="monospace">[2022-11-25 10:54:41] loop: Write error at byte offset 37908480, length 4096.
</font><br><font face="monospace">[2022-11-25 10:54:41] blk_update_request: I/O error, dev loop0, sector 74040 op 0x1:(WRITE) flags 0x3800 phys_seg 1 prio class 0
</font><br><font face="monospace">[2022-11-25 10:54:41] Buffer I/O error on dev loop0, logical block 9255, lost sync page write
</font><br><font face="monospace">[2022-11-25 10:54:46] loop: Write error at byte offset 37908480, length 4096.
</font><br><font face="monospace">[2022-11-25 10:54:46] blk_update_request: I/O error, dev loop0, sector 74040 op 0x1:(WRITE) flags 0x3800 phys_seg 1 prio class 0
</font><br><font face="monospace">[2022-11-25 10:54:46] Buffer I/O error on dev loop0, logical block 9255, lost sync page write
</font><br><font face="monospace">[2022-11-25 10:54:51] loop: Write error at byte offset 37908480, length 4096.
</font><br><font face="monospace">[2022-11-25 10:54:51] blk_update_request: I/O error, dev loop0, sector 74040 op 0x1:(WRITE) flags 0x3800 phys_seg 1 prio class 0
</font><br><font face="monospace">[2022-11-25 10:54:51] Buffer I/O error on dev loop0, logical block 9255, lost sync page write
</font><br><font face="monospace">[2022-11-25 10:54:56] loop: Write error at byte offset 37908480, length 4096.
</font><br><font face="monospace">[2022-11-25 10:54:56] blk_update_request: I/O error, dev loop0, sector 74040 op 0x1:(WRITE) flags 0x3800 phys_seg 1 prio class 0
</font><br><font face="monospace">[2022-11-25 10:54:56] Buffer I/O error on dev loop0, logical block 9255, lost sync page write
</font><br><font face="monospace">[2022-11-25 10:55:01] loop: Write error at byte offset 37908480, length 4096.
</font><br><font face="monospace">[2022-11-25 10:55:01] blk_update_request: I/O error, dev loop0, sector 74040 op 0x1:(WRITE) flags 0x3800 phys_seg 1 prio class 0
</font><br><font face="monospace">[2022-11-25 10:55:01] Buffer I/O error on dev loop0, logical block 9255, lost sync page write
</font><br><font face="monospace">[2022-11-25 10:55:04] EXT4-fs error (device loop0): ext4_journal_check_start:83: comm burp: Detected aborted journal
</font><br><font face="monospace">[2022-11-25 10:55:04] loop: Write error at byte offset 0, length 4096.
</font><br><font face="monospace">[2022-11-25 10:55:04] blk_update_request: I/O error, dev loop0, sector 0 op 0x1:(WRITE) flags 0x3800 phys_seg 1 prio class 0
</font><br><font face="monospace">[2022-11-25 10:55:04] blk_update_request: I/O error, dev loop0, sector 0 op 0x1:(WRITE) flags 0x3800 phys_seg 1 prio class 0
</font><br><font face="monospace">[2022-11-25 10:55:04] Buffer I/O error on dev loop0, logical block 0, lost sync page write
</font><br><font face="monospace">[2022-11-25 10:55:04] EXT4-fs (loop0): I/O error while writing superblock
</font><br><font face="monospace">[2022-11-25 10:55:04] EXT4-fs (loop0): Remounting filesystem read-only
</font><br><font face="monospace">[2022-11-25 10:55:07] loop: Write error at byte offset 37908480, length 4096.
</font><br><font face="monospace">[2022-11-25 10:55:07] blk_update_request: I/O error, dev loop0, sector 74040 op 0x1:(WRITE) flags 0x3800 phys_seg 1 prio class 0
</font><br><font face="monospace">[2022-11-25 10:55:07] Buffer I/O error on dev loop0, logical block 9255, lost sync page write
</font><br><font face="monospace">[2022-11-25 10:57:14] blk_update_request: I/O error, dev loop0, sector 16390368 op 0x0:(READ) flags 0x80700 phys_seg 6 prio class 0
</font><br><font face="monospace">[2022-11-25 11:03:45] device tap136i0 entered promiscuous mode
</font><br><br><font face="arial, sans-serif">I don't know if it is relevant somehow or it is unrelated to glusterfs, but the consequences are the mountpoint crashes, I'm forced to lazy unmount it and remount it back. Then restart all the VMs on there, unfortunately, this time several have the hard disk corrupted and now I'm restoring them from the backup.</font></div><div><font face="arial, sans-serif"><br></font></div><div><font face="arial, sans-serif">Any tip?<br></font><div><div dir="ltr"><div dir="ltr"><div style="color:rgb(34,34,34)"><font face="tahoma, sans-serif" size="4"><b><br></b></font></div><div style="color:rgb(34,34,34)"><font size="4" face="arial, sans-serif"><b>Angel Docampo</b></font></div><div style="color:rgb(34,34,34)"><a href="https://www.google.com/maps/place/Edificio+de+Oficinas+Euro+3/@41.3755943,2.0730134,17z/data=!3m2!4b1!5s0x12a4997021aad323:0x3e06bf8ae6d68351!4m5!3m4!1s0x12a4997a67bf592f:0x83c2323a9cc2aa4b!8m2!3d41.3755903!4d2.0752021" target="_blank"><img src="https://ci3.googleusercontent.com/mail-sig/AIorK4yfwAc1Ml7oXFmQS6cJWaMeVnZ7xmAkBZPyODZAB9R8us12sFWd19cHxqDJ7CRF-UcvfKFLJNg"></a> <a href="mailto:angel.docampo@eoniantec.com" target="_blank"><img src="https://ci3.googleusercontent.com/mail-sig/AIorK4xhLmETvCmyOlze-bvuD8EJDZ0KgPmtCKnW0ObWzrqFda6zykLG06WgSatNHY2tgyMj_FOg3RY"></a> <a href="tel:+34-93-1592929" target="_blank"><img src="https://ci3.googleusercontent.com/mail-sig/AIorK4wKRh91a3Q-nUQnp1zQ-4rrdeN4FKksw-kDAAzCOg9hOTqSiqNmU2AloNPHrS-QwtOWiFHYHl0"></a></div></div></div></div><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">El mar, 22 nov 2022 a las 12:31, Angel Docampo (<<a href="mailto:angel.docampo@eoniantec.com" target="_blank">angel.docampo@eoniantec.com</a>>) escribió:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">I've taken a look into all possible places they should be, and I couldn't find 
them anywhere. Some people say the dump file is generated where the application is running... well, then I don't know where to look, and I hope they weren't generated on the failed mountpoint.<div><br><div>As Debian 11 has systemd, I've installed systemd-coredump, so if a new crash happens, at least I will have the exact location and a tool (coredumpctl) to find them, and will then install the debug symbols, which is particularly tricky on Debian. But I need to wait for it to happen again; for now the tool says there isn't any core dump on the system.</div><div><br></div><div>Thank you, Xavi, if this happens again (let's hope it won't), I will report back.</div><div><br></div><div>Best regards!<br clear="all"><div><div dir="ltr"><div dir="ltr"><div style="color:rgb(34,34,34)"><font face="tahoma, sans-serif" size="4"><b><br></b></font></div><div style="color:rgb(34,34,34)"><font size="4" face="arial, sans-serif"><b>Angel Docampo</b></font></div><div style="color:rgb(34,34,34)"><a href="https://www.google.com/maps/place/Edificio+de+Oficinas+Euro+3/@41.3755943,2.0730134,17z/data=!3m2!4b1!5s0x12a4997021aad323:0x3e06bf8ae6d68351!4m5!3m4!1s0x12a4997a67bf592f:0x83c2323a9cc2aa4b!8m2!3d41.3755903!4d2.0752021" target="_blank"><img src="https://ci3.googleusercontent.com/mail-sig/AIorK4yfwAc1Ml7oXFmQS6cJWaMeVnZ7xmAkBZPyODZAB9R8us12sFWd19cHxqDJ7CRF-UcvfKFLJNg"></a> <a href="mailto:angel.docampo@eoniantec.com" target="_blank"><img src="https://ci3.googleusercontent.com/mail-sig/AIorK4xhLmETvCmyOlze-bvuD8EJDZ0KgPmtCKnW0ObWzrqFda6zykLG06WgSatNHY2tgyMj_FOg3RY"></a> <a href="tel:+34-93-1592929" target="_blank"><img src="https://ci3.googleusercontent.com/mail-sig/AIorK4wKRh91a3Q-nUQnp1zQ-4rrdeN4FKksw-kDAAzCOg9hOTqSiqNmU2AloNPHrS-QwtOWiFHYHl0"></a></div></div></div></div><br></div></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, Nov 22, 2022 at 10:45 AM Xavi Hernandez (<<a href="mailto:jahernan@redhat.com" target="_blank">jahernan@redhat.com</a>>) 
wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">The crash seems related to some problem in the ec xlator, but I don't have enough information to determine what it is. The crash should have generated a core dump somewhere in the system (I don't know where Debian keeps the core dumps). If you find it, you should be able to open it using this command (make sure the debug symbols package is also installed before running it):<div><br></div><div> # gdb /usr/sbin/glusterfs <path to core dump></div><div><br></div><div>And then run this command at the gdb prompt:</div><div><br></div><div> (gdb) bt full</div><div><br></div><div>Regards,</div><div><br></div><div>Xavi</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, Nov 22, 2022 at 9:41 AM Angel Docampo <<a href="mailto:angel.docampo@eoniantec.com" target="_blank">angel.docampo@eoniantec.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">Hi Xavi, <div><br></div><div>The OS is Debian 11 with the Proxmox kernel. 
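A minimal sketch of the core-dump inspection workflow discussed above, assuming systemd-coredump is active and the FUSE client binary lives at the default Debian path; the helper only prints the commands, so nothing runs by accident:

```shell
#!/bin/sh
# Hypothetical helper: print the steps to locate and open a glusterfs core
# dump captured by systemd-coredump. BIN is an assumed path; adjust it.
BIN=/usr/sbin/glusterfs

inspect_cmds() {
    # 1. list dumps recorded for the gluster FUSE client binary
    printf 'coredumpctl list %s\n' "$BIN"
    # 2. open the most recent matching dump in gdb
    #    (install the debug symbols package first)
    printf 'coredumpctl gdb %s\n' "$BIN"
    # 3. inside gdb, ask for the full backtrace
    printf '(gdb) bt full\n'
}

inspect_cmds
```

On a node where a crash has actually been recorded, you would run the first two printed commands directly rather than just printing them.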
The Gluster packages are the official ones from <a href="http://gluster.org" target="_blank">gluster.org</a> (<a href="https://download.gluster.org/pub/gluster/glusterfs/10/10.3/Debian/bullseye/" target="_blank">https://download.gluster.org/pub/gluster/glusterfs/10/10.3/Debian/bullseye/</a>)</div><div><div><br></div><div>The system logs showed no other issues at the time of the crash, no OOM kill or anything similar, and no other process was interacting with the Gluster mountpoint besides Proxmox.</div></div><div><br></div><div>I wasn't running gdb when it crashed, so I don't really know if I can obtain a more detailed trace from the logs, or if there is a simple way to leave it running in the background to catch it if it happens again (or whether there is a flag to start the systemd daemon in debug mode).</div><div><br></div><div>Best, </div><div><br><div><div><div dir="ltr"><div dir="ltr"><div style="color:rgb(34,34,34)"><font size="4" face="arial, sans-serif"><b>Angel Docampo</b></font></div><div style="color:rgb(34,34,34)"><a href="https://www.google.com/maps/place/Edificio+de+Oficinas+Euro+3/@41.3755943,2.0730134,17z/data=!3m2!4b1!5s0x12a4997021aad323:0x3e06bf8ae6d68351!4m5!3m4!1s0x12a4997a67bf592f:0x83c2323a9cc2aa4b!8m2!3d41.3755903!4d2.0752021" target="_blank"><img src="https://ci3.googleusercontent.com/mail-sig/AIorK4yfwAc1Ml7oXFmQS6cJWaMeVnZ7xmAkBZPyODZAB9R8us12sFWd19cHxqDJ7CRF-UcvfKFLJNg"></a> <a href="mailto:angel.docampo@eoniantec.com" target="_blank"><img src="https://ci3.googleusercontent.com/mail-sig/AIorK4xhLmETvCmyOlze-bvuD8EJDZ0KgPmtCKnW0ObWzrqFda6zykLG06WgSatNHY2tgyMj_FOg3RY"></a> <a href="tel:+34-93-1592929" target="_blank"><img src="https://ci3.googleusercontent.com/mail-sig/AIorK4wKRh91a3Q-nUQnp1zQ-4rrdeN4FKksw-kDAAzCOg9hOTqSiqNmU2AloNPHrS-QwtOWiFHYHl0"></a></div></div></div></div><br></div></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Mon, Nov 21, 2022 at 3:16 PM Xavi Hernandez (<<a href="mailto:jahernan@redhat.com"
target="_blank">jahernan@redhat.com</a>>) escribió:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div>Hi Angel,</div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Mon, Nov 21, 2022 at 2:33 PM Angel Docampo <<a href="mailto:angel.docampo@eoniantec.com" target="_blank">angel.docampo@eoniantec.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">Sorry for necrobumping this, but this morning I've suffered this on my Proxmox + GlusterFS cluster. In the log I can see this<div><br><div><span style="font-family:monospace"><span style="color:rgb(0,0,0)">[2022-11-21 07:38:00.213620 +0000] I [MSGID: 133017] [shard.c:7275:shard_seek] 11-vmdata-shard: seek called on fbc063cb-874e-475d-b585-f89</span><br>f7518acdd. [Operation not supported]
<br><span style="color:rgb(255,255,255);background-color:rgb(0,0,0)">pending frames</span><span style="color:rgb(0,0,0)">:
</span><br>frame : type(1) op(WRITE)
<br>frame : type(0) op(0)
<br>frame : type(0) op(0)
<br>frame : type(0) op(0)
<br>frame : type(0) op(0)
<br>frame : type(0) op(0)
<br>frame : type(0) op(0)
<br>frame : type(0) op(0)
<br>frame : type(0) op(0)
<br>frame : type(0) op(0)
<br>frame : type(0) op(0)
<br>frame : type(0) op(0)
<br>frame : type(0) op(0)
<br>frame : type(0) op(0)
<br>frame : type(0) op(0)
<br>frame : type(0) op(0)
<br>frame : type(0) op(0)<br>
...</span></div><div><span style="color:rgb(0,0,0);font-family:monospace">frame : type(1) op(FSYNC)</span><br></div><div><span style="font-family:monospace">frame : type(1) op(FSYNC)
<br>frame : type(1) op(FSYNC)
<br>frame : type(1) op(FSYNC)
<br>frame : type(1) op(FSYNC)
<br>frame : type(1) op(FSYNC)
<br>frame : type(1) op(FSYNC)
<br>frame : type(1) op(FSYNC)
<br>frame : type(1) op(FSYNC)
<br>frame : type(1) op(FSYNC)
<br>frame : type(1) op(FSYNC)
<br>frame : type(1) op(FSYNC)
<br>frame : type(1) op(FSYNC)
<br>frame : type(1) op(FSYNC)
<br>frame : type(1) op(FSYNC)
<br>frame : type(1) op(FSYNC)
<br>frame : type(1) op(FSYNC)
<br>frame : type(1) op(FSYNC)
<br>frame : type(1) op(FSYNC)
<br>frame : type(1) op(FSYNC)</span></div><div><span style="font-family:monospace"><span style="color:rgb(0,0,0)">patchset: git://<a href="http://git.gluster.org/glusterfs.git" target="_blank">git.gluster.org/glusterfs.git</a>
</span><br>signal received: 11
<br>time of crash: <br>2022-11-21 07:38:00 +0000
<br>configuration details:
<br>argp 1
<br>backtrace 1
<br>dlfcn 1
<br>libpthread 1
<br>llistxattr 1
<br>setfsid 1
<br>epoll.h 1
<br>xattr.h 1
<br>st_atim.tv_nsec 1
<br>package-string: glusterfs 10.3
<br>/lib/x86_64-linux-gnu/libglusterfs.so.0(+0x28a54)[0x7f74f286ba54]
<br>/lib/x86_64-linux-gnu/libglusterfs.so.0(gf_print_trace+0x700)[0x7f74f2873fc0]
<br>/lib/x86_64-linux-gnu/libc.so.6(+0x38d60)[0x7f74f262ed60]
<br>/usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/cluster/disperse.so(+0x37a14)[0x7f74ecfcea14]
<br>/usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/cluster/disperse.so(+0x19414)[0x7f74ecfb0414]
<br>/usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/cluster/disperse.so(+0x16373)[0x7f74ecfad373]
<br>/usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/cluster/disperse.so(+0x21d59)[0x7f74ecfb8d59]
<br>/usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/cluster/disperse.so(+0x22815)[0x7f74ecfb9815]
<br>/usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/cluster/disperse.so(+0x377d9)[0x7f74ecfce7d9]
<br>/usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/cluster/disperse.so(+0x19414)[0x7f74ecfb0414]
<br>/usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/cluster/disperse.so(+0x16373)[0x7f74ecfad373]
<br>/usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/cluster/disperse.so(+0x170f9)[0x7f74ecfae0f9]
<br>/usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/cluster/disperse.so(+0x313bb)[0x7f74ecfc83bb]
<br>/usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/protocol/client.so(+0x48e3a)[0x7f74ed06ce3a]
<br>/lib/x86_64-linux-gnu/libgfrpc.so.0(+0xfccb)[0x7f74f2816ccb]
<br>/lib/x86_64-linux-gnu/libgfrpc.so.0(rpc_transport_notify+0x26)[0x7f74f2812646]
<br>/usr/lib/x86_64-linux-gnu/glusterfs/10.3/rpc-transport/socket.so(+0x64c8)[0x7f74ee15f4c8]
<br>/usr/lib/x86_64-linux-gnu/glusterfs/10.3/rpc-transport/socket.so(+0xd38c)[0x7f74ee16638c]
<br>/lib/x86_64-linux-gnu/libglusterfs.so.0(+0x7971d)[0x7f74f28bc71d]
<br>/lib/x86_64-linux-gnu/libpthread.so.0(+0x7ea7)[0x7f74f27d2ea7]
<br>/lib/x86_64-linux-gnu/libc.so.6(clone+0x3f)[0x7f74f26f2aef]
<br>---------<br>
</span><font face="arial, sans-serif">The mount point wasn't accessible with the "Tr<span style="color:rgb(0,0,0)">ansport endpoint is not connected" message and it was shown like this.</span><br></font></div><div><span style="font-family:monospace"><span style="color:rgb(0,0,0)">d????????? ? ? ? ? ? </span><span style="font-weight:bold;color:rgb(84,84,255)">vmdata</span><br><span style="color:rgb(0,0,0)">
</span><br></span><font face="arial, sans-serif">I had to stop all the VMs on that proxmox node, then stop the gluster daemon to ummount de directory, and after starting the daemon and re-mounting, all was working again.</font></div><div><span style="font-family:monospace"><br></span></div><div><span style="font-family:monospace">My gluster volume info returns this</span></div><div><span style="font-family:monospace"> <br>Volume Name: vmdata
<br>Type: Distributed-Disperse
<br>Volume ID: cace5aa4-b13a-4750-8736-aa179c2485e1
<br>Status: Started
<br>Snapshot Count: 0
<br>Number of Bricks: 2 x (2 + 1) = 6
<br>Transport-type: tcp
<br>Bricks:
<br>Brick1: g01:/data/brick1/brick
<br>Brick2: g02:/data/brick2/brick
<br>Brick3: g03:/data/brick1/brick
<br>Brick4: g01:/data/brick2/brick
<br>Brick5: g02:/data/brick1/brick
<br>Brick6: g03:/data/brick2/brick
<br>Options Reconfigured:
<br>nfs.disable: on
<br>transport.address-family: inet
<br>storage.fips-mode-rchecksum: on
<br>features.shard: enable
<br>features.shard-block-size: 256MB
<br>performance.read-ahead: off
<br>performance.quick-read: off
<br>performance.io-cache: off
<br>server.event-threads: 2
<br>client.event-threads: 3
<br>performance.client-io-threads: on
<br>performance.stat-prefetch: off
<br>dht.force-readdirp: off
<br>performance.force-readdirp: off
<br>network.remote-dio: on
<br>features.cache-invalidation: on
<br>performance.parallel-readdir: on
<br>performance.readdir-ahead: on<br>
<br></span><font face="arial, sans-serif">Xavi, do you think the open-behind off setting can help somehow? I did try to understand what it does (with no luck), and if it could impact the performance of my VMs (I've the setup you know so well ;))</font><div>I would like to avoid more crashings like this, version 10.3 of gluster was working since two weeks ago, quite well until this morning.</div></div></div></div></blockquote><div><br></div><div>I don't think disabling open-behind will have any visible effect on performance. Open-behind is only useful for small files when the workload is mostly open + read + close, and quick-read is also enabled (which is not your case). The only effect it will have is that the latency "saved" during open is "paid" on the next operation sent to the file, so the total overall latency should be the same. Additionally, VM workload doesn't open files frequently, so it shouldn't matter much in any case.</div><div><br></div><div>That said, I'm not sure if the problem is the same in your case. Based on the stack of the crash, it seems an issue inside the disperse module.</div><div><br></div><div>What OS are you using ? are you using official packages ? 
If so, which ones?</div><div><br></div><div>Is it possible to provide a backtrace from gdb?</div><div><br></div><div>Regards,</div><div><br></div><div>Xavi</div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div><div><div><br></div><div><div><div dir="ltr"><div dir="ltr"><div style="color:rgb(34,34,34)"><font size="4" face="arial, sans-serif"><b>Angel Docampo</b></font></div><div style="color:rgb(34,34,34)"><a href="https://www.google.com/maps/place/Edificio+de+Oficinas+Euro+3/@41.3755943,2.0730134,17z/data=!3m2!4b1!5s0x12a4997021aad323:0x3e06bf8ae6d68351!4m5!3m4!1s0x12a4997a67bf592f:0x83c2323a9cc2aa4b!8m2!3d41.3755903!4d2.0752021" target="_blank"><img src="https://ci3.googleusercontent.com/mail-sig/AIorK4yfwAc1Ml7oXFmQS6cJWaMeVnZ7xmAkBZPyODZAB9R8us12sFWd19cHxqDJ7CRF-UcvfKFLJNg"></a> <a href="mailto:angel.docampo@eoniantec.com" target="_blank"><img src="https://ci3.googleusercontent.com/mail-sig/AIorK4xhLmETvCmyOlze-bvuD8EJDZ0KgPmtCKnW0ObWzrqFda6zykLG06WgSatNHY2tgyMj_FOg3RY"></a> <a href="tel:+34-93-1592929" target="_blank"><img src="https://ci3.googleusercontent.com/mail-sig/AIorK4wKRh91a3Q-nUQnp1zQ-4rrdeN4FKksw-kDAAzCOg9hOTqSiqNmU2AloNPHrS-QwtOWiFHYHl0"></a></div></div></div></div><br></div></div></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Fri, Mar 19, 2021 at 2:10 AM David Cunningham (<<a href="mailto:dcunningham@voisonics.com" target="_blank">dcunningham@voisonics.com</a>>) wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div>Hi Xavi,</div><div><br></div><div>Thank you for that information. 
We'll look at upgrading it.</div><div><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Fri, 12 Mar 2021 at 05:20, Xavi Hernandez <<a href="mailto:jahernan@redhat.com" target="_blank">jahernan@redhat.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div>Hi David,</div><div><br></div><div>With so little information it's hard to tell, but given that there are several OPEN and UNLINK operations, it could be related to a bug in open-behind that is already fixed in recent versions.</div><div><br></div><div>You can try disabling open-behind with this command:</div><div><br></div><div> <font face="monospace"># gluster volume set <volname> open-behind off</font></div><div><font face="monospace"><br></font></div><div><font face="arial, sans-serif">But given that the version you are using is very old and unmaintained, I would recommend upgrading to at least 8.x.</font></div><div><font face="arial, sans-serif"><br></font></div><div><font face="arial, sans-serif">Regards,</font></div><div><font face="arial, sans-serif"><br></font></div><div><font face="arial, sans-serif">Xavi</font></div><div><font face="arial, sans-serif"><br></font></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Wed, Mar 10, 2021 at 5:10 AM David Cunningham <<a href="mailto:dcunningham@voisonics.com" target="_blank">dcunningham@voisonics.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div>Hello,</div><div><br></div><div>We have a GlusterFS 5.13 server which also mounts itself with the native FUSE client. Recently the FUSE mount crashed and we found the following in the syslog. There isn't anything logged in mnt-glusterfs.log for that time. 
After killing all processes with a file handle open on the filesystem we were able to unmount and then remount the filesystem successfully.<br></div><div><br></div><div>Would anyone have advice on how to debug this crash? Thank you in advance!<br></div><div><br></div><div>Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: pending frames:<br>Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: frame : type(0) op(0)<br>Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: frame : type(0) op(0)<br>Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: frame : type(1) op(UNLINK)<br>Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: frame : type(1) op(UNLINK)<br>Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: frame : type(1) op(OPEN)<br>Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: message repeated 3355 times: [ frame : type(1) op(OPEN)]<br>Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: frame : type(1) op(OPEN)<br>Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: message repeated 6965 times: [ frame : type(1) op(OPEN)]<br>Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: frame : type(1) op(OPEN)<br>Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: message repeated 4095 times: [ frame : type(1) op(OPEN)]<br>Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: frame : type(0) op(0)<br>Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: patchset: git://<a href="http://git.gluster.org/glusterfs.git" target="_blank">git.gluster.org/glusterfs.git</a><br>Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: signal received: 11<br>Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: time of crash:<br>Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: 2021-03-09 03:12:31<br>Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: configuration details:<br>Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: argp 1<br>Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: backtrace 1<br>Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: dlfcn 1<br>Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: libpthread 1<br>Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: llistxattr 1<br>Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: setfsid 1<br>Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: spinlock 
1<br>Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: epoll.h 1<br>Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: xattr.h 1<br>Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: st_atim.tv_nsec 1<br>Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: package-string: glusterfs 5.13<br>Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: ---------<br>...<br>Mar 9 05:13:50 voip1 systemd[1]: glusterfssharedstorage.service: Main process exited, code=killed, status=11/SEGV<br>Mar 9 05:13:50 voip1 systemd[1]: glusterfssharedstorage.service: Failed with result 'signal'.<br>...<br>Mar 9 05:13:54 voip1 systemd[1]: glusterfssharedstorage.service: Service hold-off time over, scheduling restart.<br>Mar 9 05:13:54 voip1 systemd[1]: glusterfssharedstorage.service: Scheduled restart job, restart counter is at 2.<br>Mar 9 05:13:54 voip1 systemd[1]: Stopped Mount glusterfs sharedstorage.<br>Mar 9 05:13:54 voip1 systemd[1]: Starting Mount glusterfs sharedstorage...<br>Mar 9 05:13:54 voip1 mount-shared-storage.sh[20520]: ERROR: Mount point does not exist<br>Mar 9 05:13:54 voip1 mount-shared-storage.sh[20520]: Please specify a mount point<br>Mar 9 05:13:54 voip1 mount-shared-storage.sh[20520]: Usage:<br>Mar 9 05:13:54 voip1 mount-shared-storage.sh[20520]: man 8 /sbin/mount.glusterfs</div><div><br>-- <br><div dir="ltr"><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div>David Cunningham, Voisonics Limited<br><a href="http://voisonics.com/" target="_blank">http://voisonics.com/</a><br>USA: +1 213 221 1092<br>New Zealand: +64 (0)28 2558 3782</div></div></div></div></div></div></div></div></div></div></div></div></div>
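For anyone inspecting a similar crash dump: the "message repeated N times" lines in the syslog excerpt above stand in for thousands of identical pending-frame lines, so the true number of in-flight OPEN frames is easy to misread. A small sketch like the following (log format assumed from the excerpt, not from any GlusterFS tooling) can tally them:

```python
import re
from collections import Counter

def count_frames(lines):
    """Tally pending frames from a glusterfs crash dump in syslog.

    A "message repeated N times: [ frame : ... ]" line counts as N
    extra copies of the most recently seen frame operation.
    """
    counts = Counter()
    last_op = None
    for line in lines:
        # Check the "repeated" lines first: they also contain a
        # "frame : type(...) op(...)" fragment inside the brackets.
        m = re.search(r"message repeated (\d+) times", line)
        if m and last_op is not None:
            counts[last_op] += int(m.group(1))
            continue
        m = re.search(r"frame : type\(\d+\) op\((\w+)\)", line)
        if m:
            last_op = m.group(1)
            counts[last_op] += 1
    return counts

# The frame lines from the syslog excerpt above:
log = """\
frame : type(0) op(0)
frame : type(0) op(0)
frame : type(1) op(UNLINK)
frame : type(1) op(UNLINK)
frame : type(1) op(OPEN)
message repeated 3355 times: [ frame : type(1) op(OPEN)]
frame : type(1) op(OPEN)
message repeated 6965 times: [ frame : type(1) op(OPEN)]
frame : type(1) op(OPEN)
message repeated 4095 times: [ frame : type(1) op(OPEN)]
frame : type(0) op(0)
""".splitlines()

print(count_frames(log))  # OPEN: 14418, UNLINK: 2, 0: 3
```

Over fourteen thousand pending OPEN frames at crash time is consistent with Xavi's suspicion of the open-behind bug.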
________<br>
<br>
<br>
<br>
Community Meeting Calendar:<br>
<br>
Schedule -<br>
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC<br>
Bridge: <a href="https://meet.google.com/cpu-eiue-hvk" rel="noreferrer" target="_blank">https://meet.google.com/cpu-eiue-hvk</a><br>
Gluster-users mailing list<br>
<a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a><br>
<a href="https://lists.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">https://lists.gluster.org/mailman/listinfo/gluster-users</a><br>
</blockquote></div></div>
</blockquote></div><br clear="all"><br>-- <br><div dir="ltr"><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div>David Cunningham, Voisonics Limited<br><a href="http://voisonics.com/" target="_blank">http://voisonics.com/</a><br>USA: +1 213 221 1092<br>New Zealand: +64 (0)28 2558 3782</div></div></div></div></div></div></div></div></div></div></div>
</blockquote></div>
</blockquote></div></div>
</blockquote></div>
</blockquote></div>
</blockquote></div>
</blockquote></div>
</blockquote></div>
</blockquote></div>
</blockquote></div>
</blockquote></div>