<div dir="ltr">What is "loop0" it seems it's having some issue. Does it point to a Gluster file ?<div><br></div><div>I also see that there's an io_uring thread in D state. If that one belongs to Gluster, it may explain why systemd was unable to generate a core dump (all threads need to be stopped to generate a core dump, but a thread blocked inside the kernel cannot be stopped).</div><div><br></div><div>If you are using io_uring in Gluster, maybe you can disable it to see if it's related.</div><div><br></div><div>Xavi</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Fri, Nov 25, 2022 at 11:39 AM Angel Docampo <<a href="mailto:angel.docampo@eoniantec.com">angel.docampo@eoniantec.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">Well, just happened again, the same server, the same mountpoint.<div><br></div><div>I'm unable to get the core dumps, coredumpctl says there are no core dumps, it would be funny if I wasn't the one suffering it, but systemd-coredump service crashed as well</div><div><span style="font-family:monospace"><span style="font-weight:bold;color:rgb(255,84,84)">●</span><span style="color:rgb(0,0,0)"> systemd-coredump@0-3199871-0.service - Process Core Dump (PID 3199871/UID 0)
</span><br> Loaded: loaded (/lib/systemd/system/systemd-coredump@.service; static)
<br> Active: <span style="font-weight:bold;color:rgb(255,84,84)">failed</span><span style="color:rgb(0,0,0)"> (Result: timeout) since Fri 2022-11-25 10:54:59 CET; 39min ago
</span><br>TriggeredBy: <span style="font-weight:bold;color:rgb(84,255,84)">●</span><span style="color:rgb(0,0,0)"> systemd-coredump.socket
</span><br> Docs: man:systemd-coredump(8)
<br> Process: 3199873 ExecStart=/lib/systemd/systemd-coredump (code=killed, signal=TERM)
<br> Main PID: 3199873 (code=killed, signal=TERM)
<br> CPU: 15ms
<br>
<br>Nov 25 10:49:59 pve02 systemd[1]: Started Process Core Dump (PID 3199871/UID 0).
<br>Nov 25 10:54:59 pve02 systemd[1]: <span style="font-weight:bold;color:rgb(215,215,95)">systemd-coredump@0-3199871-0.service: Service reached runtime time limit. Stopping.</span><span style="color:rgb(0,0,0)">
</span><br>Nov 25 10:54:59 pve02 systemd[1]: <span style="font-weight:bold;color:rgb(215,215,95)">systemd-coredump@0-3199871-0.service: Failed with result 'timeout'.</span><br><span style="color:rgb(0,0,0)">
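</span><br></span></div><div><br></div><div><font face="arial, sans-serif">To list the threads stuck in uninterruptible sleep (D state), which I assume is what kept systemd-coredump from finishing within its time limit, something like this one-liner should work (plain procps, nothing Gluster-specific):</font></div><div><br></div><div><span style="font-family:monospace"># ps -eLo pid,tid,stat,comm | awk '$3 ~ /^D/'</span></div>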
<div><br></div><div>I just saw the exception in dmesg:</div><div><span style="font-family:monospace;color:rgb(0,0,0)">[2022-11-25 10:50:08] INFO: task kmmpd-loop0:681644 blocked for more than 120 seconds.
</span><br><font face="monospace">[2022-11-25 10:50:08] Tainted: P IO 5.15.60-2-pve #1
</font><br><font face="monospace">[2022-11-25 10:50:08] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
</font><br><font face="monospace">[2022-11-25 10:50:08] task:kmmpd-loop0 state:D stack: 0 pid:681644 ppid: 2 flags:0x00004000
</font><br><font face="monospace">[2022-11-25 10:50:08] Call Trace:
</font><br><font face="monospace">[2022-11-25 10:50:08] <TASK>
</font><br><font face="monospace">[2022-11-25 10:50:08] __schedule+0x33d/0x1750
</font><br><font face="monospace">[2022-11-25 10:50:08] ? bit_wait+0x70/0x70
</font><br><font face="monospace">[2022-11-25 10:50:08] schedule+0x4e/0xc0
</font><br><font face="monospace">[2022-11-25 10:50:08] io_schedule+0x46/0x80
</font><br><font face="monospace">[2022-11-25 10:50:08] bit_wait_io+0x11/0x70
</font><br><font face="monospace">[2022-11-25 10:50:08] __wait_on_bit+0x31/0xa0
</font><br><font face="monospace">[2022-11-25 10:50:08] out_of_line_wait_on_bit+0x8d/0xb0
</font><br><font face="monospace">[2022-11-25 10:50:08] ? var_wake_function+0x30/0x30
</font><br><font face="monospace">[2022-11-25 10:50:08] __wait_on_buffer+0x34/0x40
</font><br><font face="monospace">[2022-11-25 10:50:08] write_mmp_block+0x127/0x180
</font><br><font face="monospace">[2022-11-25 10:50:08] kmmpd+0x1b9/0x430
</font><br><font face="monospace">[2022-11-25 10:50:08] ? write_mmp_block+0x180/0x180
</font><br><font face="monospace">[2022-11-25 10:50:08] kthread+0x127/0x150
</font><br><font face="monospace">[2022-11-25 10:50:08] ? set_kthread_struct+0x50/0x50
</font><br><font face="monospace">[2022-11-25 10:50:08] ret_from_fork+0x1f/0x30
</font><br><font face="monospace">[2022-11-25 10:50:08] </TASK>
</font><br><font face="monospace">[2022-11-25 10:50:08] INFO: task </font>iou-wrk-1511979<font face="monospace">:3200401 blocked for more than 120 seconds.
</font><br><font face="monospace">[2022-11-25 10:50:08] Tainted: P IO 5.15.60-2-pve #1
</font><br><font face="monospace">[2022-11-25 10:50:08] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
</font><br><font face="monospace">[2022-11-25 10:50:08] task:</font>iou-wrk-1511979<font face="monospace"> state:D stack: 0 pid:3200401 ppid: 1 flags:0x00004000
</font><br><font face="monospace">[2022-11-25 10:50:08] Call Trace:
</font><br><font face="monospace">[2022-11-25 10:50:08] <TASK>
</font><br><font face="monospace">[2022-11-25 10:50:08] __schedule+0x33d/0x1750
</font><br><font face="monospace">[2022-11-25 10:50:08] schedule+0x4e/0xc0
</font><br><font face="monospace">[2022-11-25 10:50:08] rwsem_down_write_slowpath+0x231/0x4f0
</font><br><font face="monospace">[2022-11-25 10:50:08] down_write+0x47/0x60
</font><br><font face="monospace">[2022-11-25 10:50:08] fuse_file_write_iter+0x1a3/0x430
</font><br><font face="monospace">[2022-11-25 10:50:08] ? apparmor_file_permission+0x70/0x170
</font><br><font face="monospace">[2022-11-25 10:50:08] io_write+0xfb/0x320
</font><br><font face="monospace">[2022-11-25 10:50:08] ? put_dec+0x1c/0xa0
</font><br><font face="monospace">[2022-11-25 10:50:08] io_issue_sqe+0x401/0x1fc0
</font><br><font face="monospace">[2022-11-25 10:50:08] io_wq_submit_work+0x76/0xd0
</font><br><font face="monospace">[2022-11-25 10:50:08] io_worker_handle_work+0x1a7/0x5f0
</font><br><font face="monospace">[2022-11-25 10:50:08] io_wqe_worker+0x2c0/0x360
</font><br><font face="monospace">[2022-11-25 10:50:08] ? finish_task_switch.isra.0+0x7e/0x2b0
</font><br><font face="monospace">[2022-11-25 10:50:08] ? io_worker_handle_work+0x5f0/0x5f0
</font><br><font face="monospace">[2022-11-25 10:50:08] ? io_worker_handle_work+0x5f0/0x5f0
</font><br><font face="monospace">[2022-11-25 10:50:08] ret_from_fork+0x1f/0x30
</font><br><font face="monospace">[2022-11-25 10:50:08] RIP: 0033:0x0
</font><br><font face="monospace">[2022-11-25 10:50:08] RSP: 002b:0000000000000000 EFLAGS: 00000216 ORIG_RAX: 00000000000001aa
</font><br><font face="monospace">[2022-11-25 10:50:08] RAX: 0000000000000000 RBX: 00007fdb1efef640 RCX: 00007fdd59f872e9
</font><br><font face="monospace">[2022-11-25 10:50:08] RDX: 0000000000000000 RSI: 0000000000000001 RDI: 0000000000000011
</font><br><font face="monospace">[2022-11-25 10:50:08] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000008
</font><br><font face="monospace">[2022-11-25 10:50:08] R10: 0000000000000000 R11: 0000000000000216 R12: 000055662e5bd268
</font><br><font face="monospace">[2022-11-25 10:50:08] R13: 000055662e5bd320 R14: 000055662e5bd260 R15: 0000000000000000
</font><br><font face="monospace">[2022-11-25 10:50:08] </TASK>
</font><br><font face="monospace">[2022-11-25 10:52:08] INFO: task kmmpd-loop0:681644 blocked for more than 241 seconds.
</font><br><font face="monospace">[2022-11-25 10:52:08] Tainted: P IO 5.15.60-2-pve #1
</font><br><font face="monospace">[2022-11-25 10:52:08] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
</font><br><font face="monospace">[2022-11-25 10:52:08] task:kmmpd-loop0 state:D stack: 0 pid:681644 ppid: 2 flags:0x00004000
</font><br><font face="monospace">[2022-11-25 10:52:08] Call Trace:
</font><br><font face="monospace">[2022-11-25 10:52:08] <TASK>
</font><br><font face="monospace">[2022-11-25 10:52:08] __schedule+0x33d/0x1750
</font><br><font face="monospace">[2022-11-25 10:52:08] ? bit_wait+0x70/0x70
</font><br><font face="monospace">[2022-11-25 10:52:08] schedule+0x4e/0xc0
</font><br><font face="monospace">[2022-11-25 10:52:08] io_schedule+0x46/0x80
</font><br><font face="monospace">[2022-11-25 10:52:08] bit_wait_io+0x11/0x70
</font><br><font face="monospace">[2022-11-25 10:52:08] __wait_on_bit+0x31/0xa0
</font><br><font face="monospace">[2022-11-25 10:52:08] out_of_line_wait_on_bit+0x8d/0xb0
</font><br><font face="monospace">[2022-11-25 10:52:08] ? var_wake_function+0x30/0x30
</font><br><font face="monospace">[2022-11-25 10:52:08] __wait_on_buffer+0x34/0x40
</font><br><font face="monospace">[2022-11-25 10:52:08] write_mmp_block+0x127/0x180
</font><br><font face="monospace">[2022-11-25 10:52:08] kmmpd+0x1b9/0x430
</font><br><font face="monospace">[2022-11-25 10:52:08] ? write_mmp_block+0x180/0x180
</font><br><font face="monospace">[2022-11-25 10:52:08] kthread+0x127/0x150
</font><br><font face="monospace">[2022-11-25 10:52:08] ? set_kthread_struct+0x50/0x50
</font><br><font face="monospace">[2022-11-25 10:52:08] ret_from_fork+0x1f/0x30
</font><br><font face="monospace">[2022-11-25 10:52:08] </TASK>
</font><br><font face="monospace">[2022-11-25 10:52:08] INFO: task </font>iou-wrk-1511979<font face="monospace">:3200401 blocked for more than 241 seconds.
</font><br><font face="monospace">[2022-11-25 10:52:08] Tainted: P IO 5.15.60-2-pve #1
</font><br><font face="monospace">[2022-11-25 10:52:08] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
</font><br><font face="monospace">[2022-11-25 10:52:08] task:</font>iou-wrk-1511979<font face="monospace"> state:D stack: 0 pid:3200401 ppid: 1 flags:0x00004000
</font><br><font face="monospace">[2022-11-25 10:52:08] Call Trace:
</font><br><font face="monospace">[2022-11-25 10:52:08] <TASK>
</font><br><font face="monospace">[2022-11-25 10:52:08] __schedule+0x33d/0x1750
</font><br><font face="monospace">[2022-11-25 10:52:08] schedule+0x4e/0xc0
</font><br><font face="monospace">[2022-11-25 10:52:08] rwsem_down_write_slowpath+0x231/0x4f0
</font><br><font face="monospace">[2022-11-25 10:52:08] down_write+0x47/0x60
</font><br><font face="monospace">[2022-11-25 10:52:08] fuse_file_write_iter+0x1a3/0x430
</font><br><font face="monospace">[2022-11-25 10:52:08] ? apparmor_file_permission+0x70/0x170
</font><br><font face="monospace">[2022-11-25 10:52:08] io_write+0xfb/0x320
</font><br><font face="monospace">[2022-11-25 10:52:08] ? put_dec+0x1c/0xa0
</font><br><font face="monospace">[2022-11-25 10:52:08] io_issue_sqe+0x401/0x1fc0
</font><br><font face="monospace">[2022-11-25 10:52:08] io_wq_submit_work+0x76/0xd0
</font><br><font face="monospace">[2022-11-25 10:52:08] io_worker_handle_work+0x1a7/0x5f0
</font><br><font face="monospace">[2022-11-25 10:52:08] io_wqe_worker+0x2c0/0x360
</font><br><font face="monospace">[2022-11-25 10:52:08] ? finish_task_switch.isra.0+0x7e/0x2b0
</font><br><font face="monospace">[2022-11-25 10:52:08] ? io_worker_handle_work+0x5f0/0x5f0
</font><br><font face="monospace">[2022-11-25 10:52:08] ? io_worker_handle_work+0x5f0/0x5f0
</font><br><font face="monospace">[2022-11-25 10:52:08] ret_from_fork+0x1f/0x30
</font><br><font face="monospace">[2022-11-25 10:52:08] RIP: 0033:0x0
</font><br><font face="monospace">[2022-11-25 10:52:08] RSP: 002b:0000000000000000 EFLAGS: 00000216 ORIG_RAX: 00000000000001aa
</font><br><font face="monospace">[2022-11-25 10:52:08] RAX: 0000000000000000 RBX: 00007fdb1efef640 RCX: 00007fdd59f872e9
</font><br><font face="monospace">[2022-11-25 10:52:08] RDX: 0000000000000000 RSI: 0000000000000001 RDI: 0000000000000011
</font><br><font face="monospace">[2022-11-25 10:52:08] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000008
</font><br><font face="monospace">[2022-11-25 10:52:08] R10: 0000000000000000 R11: 0000000000000216 R12: 000055662e5bd268
</font><br><font face="monospace">[2022-11-25 10:52:08] R13: 000055662e5bd320 R14: 000055662e5bd260 R15: 0000000000000000
</font><br><font face="monospace">[2022-11-25 10:52:08] </TASK>
</font><br><font face="monospace">[2022-11-25 10:52:12] loop: Write error at byte offset 37908480, length 4096.
</font><br><font face="monospace">[2022-11-25 10:52:12] print_req_error: 7 callbacks suppressed
</font><br><font face="monospace">[2022-11-25 10:52:12] blk_update_request: I/O error, dev loop0, sector 74040 op 0x1:(WRITE) flags 0x3800 phys_seg 1 prio class 0
</font><br><font face="monospace">[2022-11-25 10:52:12] Buffer I/O error on dev loop0, logical block 9255, lost sync page write
</font><br><font face="monospace">[2022-11-25 10:52:12] EXT4-fs error (device loop0): kmmpd:179: comm kmmpd-loop0: Error writing to MMP block
</font><br><font face="monospace">[2022-11-25 10:52:12] loop: Write error at byte offset 37908480, length 4096.
</font><br><font face="monospace">[2022-11-25 10:52:12] blk_update_request: I/O error, dev loop0, sector 74040 op 0x1:(WRITE) flags 0x3800 phys_seg 1 prio class 0
</font><br><font face="monospace">[2022-11-25 10:52:12] Buffer I/O error on dev loop0, logical block 9255, lost sync page write
</font><br><font face="monospace">[2022-11-25 10:52:18] loop: Write error at byte offset 37908480, length 4096.
</font><br><font face="monospace">[2022-11-25 10:52:18] blk_update_request: I/O error, dev loop0, sector 74040 op 0x1:(WRITE) flags 0x3800 phys_seg 1 prio class 0
</font><br><font face="monospace">[2022-11-25 10:52:18] Buffer I/O error on dev loop0, logical block 9255, lost sync page write
</font><br><font face="monospace">[2022-11-25 10:52:18] loop: Write error at byte offset 4490452992, length 4096.
</font><br><font face="monospace">[2022-11-25 10:52:18] loop: Write error at byte offset 4490457088, length 4096.
</font><br><font face="monospace">[2022-11-25 10:52:18] blk_update_request: I/O error, dev loop0, sector 8770416 op 0x1:(WRITE) flags 0x800 phys_seg 1 prio class 0
</font><br><font face="monospace">[2022-11-25 10:52:18] blk_update_request: I/O error, dev loop0, sector 8770424 op 0x1:(WRITE) flags 0x800 phys_seg 1 prio class 0
</font><br><font face="monospace">[2022-11-25 10:52:18] Aborting journal on device loop0-8.
</font><br><font face="monospace">[2022-11-25 10:52:18] loop: Write error at byte offset 4429185024, length 4096.
</font><br><font face="monospace">[2022-11-25 10:52:18] blk_update_request: I/O error, dev loop0, sector 8650752 op 0x1:(WRITE) flags 0x800 phys_seg 1 prio class 0
</font><br><font face="monospace">[2022-11-25 10:52:18] blk_update_request: I/O error, dev loop0, sector 8650752 op 0x1:(WRITE) flags 0x800 phys_seg 1 prio class 0
</font><br><font face="monospace">[2022-11-25 10:52:18] Buffer I/O error on dev loop0, logical block 1081344, lost sync page write
</font><br><font face="monospace">[2022-11-25 10:52:18] JBD2: Error -5 detected when updating journal superblock for loop0-8.
</font><br><font face="monospace">[2022-11-25 10:52:23] loop: Write error at byte offset 37908480, length 4096.
</font><br><font face="monospace">[2022-11-25 10:52:23] blk_update_request: I/O error, dev loop0, sector 74040 op 0x1:(WRITE) flags 0x3800 phys_seg 1 prio class 0
</font><br><font face="monospace">[2022-11-25 10:52:23] Buffer I/O error on dev loop0, logical block 9255, lost sync page write
</font><br><font face="monospace">[2022-11-25 10:52:28] loop: Write error at byte offset 37908480, length 4096.
</font><br><font face="monospace">[2022-11-25 10:52:28] blk_update_request: I/O error, dev loop0, sector 74040 op 0x1:(WRITE) flags 0x3800 phys_seg 1 prio class 0
</font><br><font face="monospace">[2022-11-25 10:52:28] Buffer I/O error on dev loop0, logical block 9255, lost sync page write
</font><br><font face="monospace">[2022-11-25 10:52:33] loop: Write error at byte offset 37908480, length 4096.
</font><br><font face="monospace">[2022-11-25 10:52:33] blk_update_request: I/O error, dev loop0, sector 74040 op 0x1:(WRITE) flags 0x3800 phys_seg 1 prio class 0
</font><br><font face="monospace">[2022-11-25 10:52:33] Buffer I/O error on dev loop0, logical block 9255, lost sync page write
</font><br><font face="monospace">[2022-11-25 10:52:38] loop: Write error at byte offset 37908480, length 4096.
</font><br><font face="monospace">[2022-11-25 10:52:38] blk_update_request: I/O error, dev loop0, sector 74040 op 0x1:(WRITE) flags 0x3800 phys_seg 1 prio class 0
</font><br><font face="monospace">[2022-11-25 10:52:38] Buffer I/O error on dev loop0, logical block 9255, lost sync page write
</font><br><font face="monospace">[2022-11-25 10:52:43] loop: Write error at byte offset 37908480, length 4096.
</font><br><font face="monospace">[2022-11-25 10:52:43] blk_update_request: I/O error, dev loop0, sector 74040 op 0x1:(WRITE) flags 0x3800 phys_seg 1 prio class 0
</font><br><font face="monospace">[2022-11-25 10:52:43] Buffer I/O error on dev loop0, logical block 9255, lost sync page write
</font><br><font face="monospace">[2022-11-25 10:52:48] loop: Write error at byte offset 37908480, length 4096.
</font><br><font face="monospace">[2022-11-25 10:52:48] blk_update_request: I/O error, dev loop0, sector 74040 op 0x1:(WRITE) flags 0x3800 phys_seg 1 prio class 0
</font><br><font face="monospace">[2022-11-25 10:52:48] Buffer I/O error on dev loop0, logical block 9255, lost sync page write
</font><br><font face="monospace">[2022-11-25 10:52:53] loop: Write error at byte offset 37908480, length 4096.
</font><br><font face="monospace">[2022-11-25 10:52:53] blk_update_request: I/O error, dev loop0, sector 74040 op 0x1:(WRITE) flags 0x3800 phys_seg 1 prio class 0
</font><br><font face="monospace">[2022-11-25 10:52:53] Buffer I/O error on dev loop0, logical block 9255, lost sync page write
</font><br><font face="monospace">[2022-11-25 10:52:59] loop: Write error at byte offset 37908480, length 4096.
</font><br><font face="monospace">[2022-11-25 10:52:59] blk_update_request: I/O error, dev loop0, sector 74040 op 0x1:(WRITE) flags 0x3800 phys_seg 1 prio class 0
</font><br><font face="monospace">[2022-11-25 10:52:59] Buffer I/O error on dev loop0, logical block 9255, lost sync page write
</font><br><font face="monospace">[2022-11-25 10:53:04] loop: Write error at byte offset 37908480, length 4096.
</font><br><font face="monospace">[2022-11-25 10:53:04] blk_update_request: I/O error, dev loop0, sector 74040 op 0x1:(WRITE) flags 0x3800 phys_seg 1 prio class 0
</font><br><font face="monospace">[2022-11-25 10:53:04] Buffer I/O error on dev loop0, logical block 9255, lost sync page write
</font><br><font face="monospace">[2022-11-25 10:53:09] loop: Write error at byte offset 37908480, length 4096.
</font><br><font face="monospace">[2022-11-25 10:53:09] blk_update_request: I/O error, dev loop0, sector 74040 op 0x1:(WRITE) flags 0x3800 phys_seg 1 prio class 0
</font><br><font face="monospace">[2022-11-25 10:53:09] Buffer I/O error on dev loop0, logical block 9255, lost sync page write
</font><br><font face="monospace">[2022-11-25 10:53:14] loop: Write error at byte offset 37908480, length 4096.
</font><br><font face="monospace">[2022-11-25 10:53:14] blk_update_request: I/O error, dev loop0, sector 74040 op 0x1:(WRITE) flags 0x3800 phys_seg 1 prio class 0
</font><br><font face="monospace">[2022-11-25 10:53:14] Buffer I/O error on dev loop0, logical block 9255, lost sync page write
</font><br><font face="monospace">[2022-11-25 10:53:19] loop: Write error at byte offset 37908480, length 4096.
</font><br><font face="monospace">[2022-11-25 10:53:19] blk_update_request: I/O error, dev loop0, sector 74040 op 0x1:(WRITE) flags 0x3800 phys_seg 1 prio class 0
</font><br><font face="monospace">[2022-11-25 10:53:19] Buffer I/O error on dev loop0, logical block 9255, lost sync page write
</font><br><font face="monospace">[2022-11-25 10:53:24] loop: Write error at byte offset 37908480, length 4096.
</font><br><font face="monospace">[2022-11-25 10:53:24] blk_update_request: I/O error, dev loop0, sector 74040 op 0x1:(WRITE) flags 0x3800 phys_seg 1 prio class 0
</font><br><font face="monospace">[2022-11-25 10:53:24] Buffer I/O error on dev loop0, logical block 9255, lost sync page write
</font><br><font face="monospace">[2022-11-25 10:53:29] loop: Write error at byte offset 37908480, length 4096.
</font><br><font face="monospace">[2022-11-25 10:53:29] blk_update_request: I/O error, dev loop0, sector 74040 op 0x1:(WRITE) flags 0x3800 phys_seg 1 prio class 0
</font><br><font face="monospace">[2022-11-25 10:53:29] Buffer I/O error on dev loop0, logical block 9255, lost sync page write
</font><br><font face="monospace">[2022-11-25 10:53:34] loop: Write error at byte offset 37908480, length 4096.
</font><br><font face="monospace">[2022-11-25 10:53:34] blk_update_request: I/O error, dev loop0, sector 74040 op 0x1:(WRITE) flags 0x3800 phys_seg 1 prio class 0
</font><br><font face="monospace">[2022-11-25 10:53:34] Buffer I/O error on dev loop0, logical block 9255, lost sync page write
</font><br><font face="monospace">[2022-11-25 10:53:40] loop: Write error at byte offset 37908480, length 4096.
</font><br><font face="monospace">[2022-11-25 10:53:40] blk_update_request: I/O error, dev loop0, sector 74040 op 0x1:(WRITE) flags 0x3800 phys_seg 1 prio class 0
</font><br><font face="monospace">[2022-11-25 10:53:40] Buffer I/O error on dev loop0, logical block 9255, lost sync page write
</font><br><font face="monospace">[2022-11-25 10:53:45] loop: Write error at byte offset 37908480, length 4096.
</font><br><font face="monospace">[2022-11-25 10:53:45] blk_update_request: I/O error, dev loop0, sector 74040 op 0x1:(WRITE) flags 0x3800 phys_seg 1 prio class 0
</font><br><font face="monospace">[2022-11-25 10:53:45] Buffer I/O error on dev loop0, logical block 9255, lost sync page write
</font><br><font face="monospace">[2022-11-25 10:53:50] loop: Write error at byte offset 37908480, length 4096.
</font><br><font face="monospace">[2022-11-25 10:53:50] blk_update_request: I/O error, dev loop0, sector 74040 op 0x1:(WRITE) flags 0x3800 phys_seg 1 prio class 0
</font><br><font face="monospace">[2022-11-25 10:53:50] Buffer I/O error on dev loop0, logical block 9255, lost sync page write
</font><br><font face="monospace">[2022-11-25 10:53:55] loop: Write error at byte offset 37908480, length 4096.
</font><br><font face="monospace">[2022-11-25 10:53:55] blk_update_request: I/O error, dev loop0, sector 74040 op 0x1:(WRITE) flags 0x3800 phys_seg 1 prio class 0
</font><br><font face="monospace">[2022-11-25 10:53:55] Buffer I/O error on dev loop0, logical block 9255, lost sync page write
</font><br><font face="monospace">[2022-11-25 10:54:00] loop: Write error at byte offset 37908480, length 4096.
</font><br><font face="monospace">[2022-11-25 10:54:00] blk_update_request: I/O error, dev loop0, sector 74040 op 0x1:(WRITE) flags 0x3800 phys_seg 1 prio class 0
</font><br><font face="monospace">[2022-11-25 10:54:00] Buffer I/O error on dev loop0, logical block 9255, lost sync page write
</font><br><font face="monospace">[2022-11-25 10:54:05] loop: Write error at byte offset 37908480, length 4096.
</font><br><font face="monospace">[2022-11-25 10:54:05] blk_update_request: I/O error, dev loop0, sector 74040 op 0x1:(WRITE) flags 0x3800 phys_seg 1 prio class 0
</font><br><font face="monospace">[2022-11-25 10:54:05] Buffer I/O error on dev loop0, logical block 9255, lost sync page write
</font><br><font face="monospace">[2022-11-25 10:54:10] loop: Write error at byte offset 37908480, length 4096.
</font><br><font face="monospace">[2022-11-25 10:54:10] blk_update_request: I/O error, dev loop0, sector 74040 op 0x1:(WRITE) flags 0x3800 phys_seg 1 prio class 0
</font><br><font face="monospace">[2022-11-25 10:54:10] Buffer I/O error on dev loop0, logical block 9255, lost sync page write
</font><br><font face="monospace">[2022-11-25 10:54:15] loop: Write error at byte offset 37908480, length 4096.
</font><br><font face="monospace">[2022-11-25 10:54:15] blk_update_request: I/O error, dev loop0, sector 74040 op 0x1:(WRITE) flags 0x3800 phys_seg 1 prio class 0
</font><br><font face="monospace">[2022-11-25 10:54:15] Buffer I/O error on dev loop0, logical block 9255, lost sync page write
</font><br><font face="monospace">[2022-11-25 10:54:21] loop: Write error at byte offset 37908480, length 4096.
</font><br><font face="monospace">[2022-11-25 10:54:21] blk_update_request: I/O error, dev loop0, sector 74040 op 0x1:(WRITE) flags 0x3800 phys_seg 1 prio class 0
</font><br><font face="monospace">[2022-11-25 10:54:21] Buffer I/O error on dev loop0, logical block 9255, lost sync page write
</font><br><font face="monospace">[2022-11-25 10:54:26] loop: Write error at byte offset 37908480, length 4096.
</font><br><font face="monospace">[2022-11-25 10:54:26] blk_update_request: I/O error, dev loop0, sector 74040 op 0x1:(WRITE) flags 0x3800 phys_seg 1 prio class 0
</font><br><font face="monospace">[2022-11-25 10:54:26] Buffer I/O error on dev loop0, logical block 9255, lost sync page write
</font><br><font face="monospace">[2022-11-25 10:54:31] loop: Write error at byte offset 37908480, length 4096.
</font><br><font face="monospace">[2022-11-25 10:54:31] blk_update_request: I/O error, dev loop0, sector 74040 op 0x1:(WRITE) flags 0x3800 phys_seg 1 prio class 0
</font><br><font face="monospace">[2022-11-25 10:54:31] Buffer I/O error on dev loop0, logical block 9255, lost sync page write
</font><br><font face="monospace">[2022-11-25 10:54:36] loop: Write error at byte offset 37908480, length 4096.
</font><br><font face="monospace">[2022-11-25 10:54:36] blk_update_request: I/O error, dev loop0, sector 74040 op 0x1:(WRITE) flags 0x3800 phys_seg 1 prio class 0
</font><br><font face="monospace">[2022-11-25 10:54:36] Buffer I/O error on dev loop0, logical block 9255, lost sync page write
</font><br><font face="monospace">[2022-11-25 10:54:41] loop: Write error at byte offset 37908480, length 4096.
</font><br><font face="monospace">[2022-11-25 10:54:41] blk_update_request: I/O error, dev loop0, sector 74040 op 0x1:(WRITE) flags 0x3800 phys_seg 1 prio class 0
</font><br><font face="monospace">[2022-11-25 10:54:41] Buffer I/O error on dev loop0, logical block 9255, lost sync page write
</font><br><font face="monospace">[2022-11-25 10:54:46] loop: Write error at byte offset 37908480, length 4096.
</font><br><font face="monospace">[2022-11-25 10:54:46] blk_update_request: I/O error, dev loop0, sector 74040 op 0x1:(WRITE) flags 0x3800 phys_seg 1 prio class 0
</font><br><font face="monospace">[2022-11-25 10:54:46] Buffer I/O error on dev loop0, logical block 9255, lost sync page write
</font><br><font face="monospace">[2022-11-25 10:54:51] loop: Write error at byte offset 37908480, length 4096.
</font><br><font face="monospace">[2022-11-25 10:54:51] blk_update_request: I/O error, dev loop0, sector 74040 op 0x1:(WRITE) flags 0x3800 phys_seg 1 prio class 0
</font><br><font face="monospace">[2022-11-25 10:54:51] Buffer I/O error on dev loop0, logical block 9255, lost sync page write
</font><br><font face="monospace">[2022-11-25 10:54:56] loop: Write error at byte offset 37908480, length 4096.
</font><br><font face="monospace">[2022-11-25 10:54:56] blk_update_request: I/O error, dev loop0, sector 74040 op 0x1:(WRITE) flags 0x3800 phys_seg 1 prio class 0
</font><br><font face="monospace">[2022-11-25 10:54:56] Buffer I/O error on dev loop0, logical block 9255, lost sync page write
</font><br><font face="monospace">[2022-11-25 10:55:01] loop: Write error at byte offset 37908480, length 4096.
</font><br><font face="monospace">[2022-11-25 10:55:01] blk_update_request: I/O error, dev loop0, sector 74040 op 0x1:(WRITE) flags 0x3800 phys_seg 1 prio class 0
</font><br><font face="monospace">[2022-11-25 10:55:01] Buffer I/O error on dev loop0, logical block 9255, lost sync page write
</font><br><font face="monospace">[2022-11-25 10:55:04] EXT4-fs error (device loop0): ext4_journal_check_start:83: comm burp: Detected aborted journal
</font><br><font face="monospace">[2022-11-25 10:55:04] loop: Write error at byte offset 0, length 4096.
</font><br><font face="monospace">[2022-11-25 10:55:04] blk_update_request: I/O error, dev loop0, sector 0 op 0x1:(WRITE) flags 0x3800 phys_seg 1 prio class 0
</font><br><font face="monospace">[2022-11-25 10:55:04] blk_update_request: I/O error, dev loop0, sector 0 op 0x1:(WRITE) flags 0x3800 phys_seg 1 prio class 0
</font><br><font face="monospace">[2022-11-25 10:55:04] Buffer I/O error on dev loop0, logical block 0, lost sync page write
</font><br><font face="monospace">[2022-11-25 10:55:04] EXT4-fs (loop0): I/O error while writing superblock
</font><br><font face="monospace">[2022-11-25 10:55:04] EXT4-fs (loop0): Remounting filesystem read-only
</font><br><font face="monospace">[2022-11-25 10:55:07] loop: Write error at byte offset 37908480, length 4096.
</font><br><font face="monospace">[2022-11-25 10:55:07] blk_update_request: I/O error, dev loop0, sector 74040 op 0x1:(WRITE) flags 0x3800 phys_seg 1 prio class 0
</font><br><font face="monospace">[2022-11-25 10:55:07] Buffer I/O error on dev loop0, logical block 9255, lost sync page write
</font><br><font face="monospace">[2022-11-25 10:57:14] blk_update_request: I/O error, dev loop0, sector 16390368 op 0x0:(READ) flags 0x80700 phys_seg 6 prio class 0
</font><br><font face="monospace">[2022-11-25 11:03:45] device tap136i0 entered promiscuous mode
</font><br><br><font face="arial, sans-serif">I don't know if it is relevant somehow or it is unrelated to glusterfs, but the consequences are the mountpoint crashes, I'm forced to lazy unmount it and remount it back. Then restart all the VMs on there, unfortunately, this time several have the hard disk corrupted and now I'm restoring them from the backup.</font></div><div><font face="arial, sans-serif"><br></font></div><div><font face="arial, sans-serif">Any tip?<br></font><div><div dir="ltr"><div dir="ltr"><div style="color:rgb(34,34,34)"><font face="tahoma, sans-serif" size="4"><b><br></b></font></div><div style="color:rgb(34,34,34)"><font size="4" face="arial, sans-serif"><b>Angel Docampo</b></font></div><div style="color:rgb(34,34,34)"><a href="https://www.google.com/maps/place/Edificio+de+Oficinas+Euro+3/@41.3755943,2.0730134,17z/data=!3m2!4b1!5s0x12a4997021aad323:0x3e06bf8ae6d68351!4m5!3m4!1s0x12a4997a67bf592f:0x83c2323a9cc2aa4b!8m2!3d41.3755903!4d2.0752021" target="_blank"><img src="https://ci3.googleusercontent.com/mail-sig/AIorK4yfwAc1Ml7oXFmQS6cJWaMeVnZ7xmAkBZPyODZAB9R8us12sFWd19cHxqDJ7CRF-UcvfKFLJNg"></a> <a href="mailto:angel.docampo@eoniantec.com" target="_blank"><img src="https://ci3.googleusercontent.com/mail-sig/AIorK4xhLmETvCmyOlze-bvuD8EJDZ0KgPmtCKnW0ObWzrqFda6zykLG06WgSatNHY2tgyMj_FOg3RY"></a> <a href="tel:+34-93-1592929" target="_blank"><img src="https://ci3.googleusercontent.com/mail-sig/AIorK4wKRh91a3Q-nUQnp1zQ-4rrdeN4FKksw-kDAAzCOg9hOTqSiqNmU2AloNPHrS-QwtOWiFHYHl0"></a></div></div></div></div><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">El mar, 22 nov 2022 a las 12:31, Angel Docampo (<<a href="mailto:angel.docampo@eoniantec.com" target="_blank">angel.docampo@eoniantec.com</a>>) escribió:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">I've taken a look into all possible places they should be, and I couldn't find it anywhere. Some people say the dump file is generated where the application is running... well, I don't know where to look then, and I hope they hadn't been generated on the failed mountpoint.<div><br><div>As Debian 11 has systemd, I've installed systemd-coredump, so in the case a new crash happens, at least I will have the exact location and tool (coredumpctl) to find them and will install then the debug symbols, which is particularly tricky on debian. 
<div><br></div><div>Thank you, Xavi; if this happens again (let's hope it won't), I will report back.</div><div><br></div><div>Best regards!<br clear="all"><div><div dir="ltr"><div dir="ltr"><div style="color:rgb(34,34,34)"><font face="tahoma, sans-serif" size="4"><b><br></b></font></div><div style="color:rgb(34,34,34)"><font size="4" face="arial, sans-serif"><b>Angel Docampo</b></font></div><div style="color:rgb(34,34,34)"><a href="https://www.google.com/maps/place/Edificio+de+Oficinas+Euro+3/@41.3755943,2.0730134,17z/data=!3m2!4b1!5s0x12a4997021aad323:0x3e06bf8ae6d68351!4m5!3m4!1s0x12a4997a67bf592f:0x83c2323a9cc2aa4b!8m2!3d41.3755903!4d2.0752021" target="_blank"><img src="https://ci3.googleusercontent.com/mail-sig/AIorK4yfwAc1Ml7oXFmQS6cJWaMeVnZ7xmAkBZPyODZAB9R8us12sFWd19cHxqDJ7CRF-UcvfKFLJNg"></a> <a href="mailto:angel.docampo@eoniantec.com" target="_blank"><img src="https://ci3.googleusercontent.com/mail-sig/AIorK4xhLmETvCmyOlze-bvuD8EJDZ0KgPmtCKnW0ObWzrqFda6zykLG06WgSatNHY2tgyMj_FOg3RY"></a> <a href="tel:+34-93-1592929" target="_blank"><img src="https://ci3.googleusercontent.com/mail-sig/AIorK4wKRh91a3Q-nUQnp1zQ-4rrdeN4FKksw-kDAAzCOg9hOTqSiqNmU2AloNPHrS-QwtOWiFHYHl0"></a></div></div></div></div><br></div></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, 22 Nov 2022 at 10:45, Xavi Hernandez (<<a href="mailto:jahernan@redhat.com" target="_blank">jahernan@redhat.com</a>>) wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">The crash seems related to some problem in the ec xlator, but I don't have enough information to determine what it is. The crash should have generated a core dump somewhere in the system (I don't know where Debian keeps the core dumps). If you find it, you should be able to open it using this command (make sure the debug symbols package is also installed before running it):<div><br></div><div> # gdb /usr/sbin/glusterfs <path to core dump></div><div><br></div><div>And then run this command:</div><div><br></div><div> # bt full</div>
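<div><br></div><div>If that alone isn't conclusive, a backtrace of every thread may also help (this is plain gdb, nothing Gluster-specific):</div><div><br></div><div> # thread apply all bt full</div>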
<div><br></div><div>Regards,</div><div><br></div><div>Xavi</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, Nov 22, 2022 at 9:41 AM Angel Docampo <<a href="mailto:angel.docampo@eoniantec.com" target="_blank">angel.docampo@eoniantec.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">Hi Xavi, <div><br></div><div>The OS is Debian 11 with the Proxmox kernel. The Gluster packages are the official ones from <a href="http://gluster.org" target="_blank">gluster.org</a> (<a href="https://download.gluster.org/pub/gluster/glusterfs/10/10.3/Debian/bullseye/" target="_blank">https://download.gluster.org/pub/gluster/glusterfs/10/10.3/Debian/bullseye/</a>)</div><div><div><br></div><div>The system logs showed no other issues at the time of the crash, no OOM kill or anything of the sort, and no other process was interacting with the gluster mountpoint besides Proxmox.</div></div><div><br></div><div>I wasn't running gdb when it crashed, so I don't really know if I can obtain a more detailed trace from the logs, or if there is a simple way to leave it running in the background in case it happens again (or a flag to start the systemd daemon in debug mode).</div><div><br></div><div>Best, </div><div><br><div><div><div dir="ltr"><div dir="ltr"><div style="color:rgb(34,34,34)"><font size="4" face="arial, sans-serif"><b>Angel Docampo</b></font></div><div style="color:rgb(34,34,34)"><a href="https://www.google.com/maps/place/Edificio+de+Oficinas+Euro+3/@41.3755943,2.0730134,17z/data=!3m2!4b1!5s0x12a4997021aad323:0x3e06bf8ae6d68351!4m5!3m4!1s0x12a4997a67bf592f:0x83c2323a9cc2aa4b!8m2!3d41.3755903!4d2.0752021" target="_blank"><img src="https://ci3.googleusercontent.com/mail-sig/AIorK4yfwAc1Ml7oXFmQS6cJWaMeVnZ7xmAkBZPyODZAB9R8us12sFWd19cHxqDJ7CRF-UcvfKFLJNg"></a> <a href="mailto:angel.docampo@eoniantec.com" target="_blank"><img src="https://ci3.googleusercontent.com/mail-sig/AIorK4xhLmETvCmyOlze-bvuD8EJDZ0KgPmtCKnW0ObWzrqFda6zykLG06WgSatNHY2tgyMj_FOg3RY"></a> <a href="tel:+34-93-1592929" target="_blank"><img src="https://ci3.googleusercontent.com/mail-sig/AIorK4wKRh91a3Q-nUQnp1zQ-4rrdeN4FKksw-kDAAzCOg9hOTqSiqNmU2AloNPHrS-QwtOWiFHYHl0"></a></div></div></div></div><br></div></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Mon, 21 Nov 2022 at 15:16, Xavi Hernandez (<<a href="mailto:jahernan@redhat.com" target="_blank">jahernan@redhat.com</a>>) wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div>Hi Angel,</div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Mon, Nov 21, 2022 at 2:33 PM Angel Docampo <<a href="mailto:angel.docampo@eoniantec.com" target="_blank">angel.docampo@eoniantec.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">Sorry for necrobumping this, but this morning I suffered this on my Proxmox + GlusterFS cluster. In the log I can see this:<div><br><div><span style="font-family:monospace"><span style="color:rgb(0,0,0)">[2022-11-21 07:38:00.213620 +0000] I [MSGID: 133017] [shard.c:7275:shard_seek] 11-vmdata-shard: seek called on fbc063cb-874e-475d-b585-f89</span><br>f7518acdd. [Operation not supported]
<br><span style="color:rgb(255,255,255);background-color:rgb(0,0,0)">pending frames</span><span style="color:rgb(0,0,0)">:
</span><br>frame : type(1) op(WRITE)
<br>frame : type(0) op(0)
<br>frame : type(0) op(0)
<br>frame : type(0) op(0)
<br>frame : type(0) op(0)
<br>frame : type(0) op(0)
<br>frame : type(0) op(0)
<br>frame : type(0) op(0)
<br>frame : type(0) op(0)
<br>frame : type(0) op(0)
<br>frame : type(0) op(0)
<br>frame : type(0) op(0)
<br>frame : type(0) op(0)
<br>frame : type(0) op(0)
<br>frame : type(0) op(0)
<br>frame : type(0) op(0)
<br>frame : type(0) op(0)<br>
...</span></div><div><span style="color:rgb(0,0,0);font-family:monospace">frame : type(1) op(FSYNC)</span><br></div><div><span style="font-family:monospace">frame : type(1) op(FSYNC)
<br>frame : type(1) op(FSYNC)
<br>frame : type(1) op(FSYNC)
<br>frame : type(1) op(FSYNC)
<br>frame : type(1) op(FSYNC)
<br>frame : type(1) op(FSYNC)
<br>frame : type(1) op(FSYNC)
<br>frame : type(1) op(FSYNC)
<br>frame : type(1) op(FSYNC)
<br>frame : type(1) op(FSYNC)
<br>frame : type(1) op(FSYNC)
<br>frame : type(1) op(FSYNC)
<br>frame : type(1) op(FSYNC)
<br>frame : type(1) op(FSYNC)
<br>frame : type(1) op(FSYNC)
<br>frame : type(1) op(FSYNC)
<br>frame : type(1) op(FSYNC)
<br>frame : type(1) op(FSYNC)
<br>frame : type(1) op(FSYNC)</span></div><div><span style="font-family:monospace"><span style="color:rgb(0,0,0)">patchset: git://<a href="http://git.gluster.org/glusterfs.git" target="_blank">git.gluster.org/glusterfs.git</a>
</span><br>signal received: 11
<br>time of crash: <br>2022-11-21 07:38:00 +0000
<br>configuration details:
<br>argp 1
<br>backtrace 1
<br>dlfcn 1
<br>libpthread 1
<br>llistxattr 1
<br>setfsid 1
<br>epoll.h 1
<br>xattr.h 1
<br>st_atim.tv_nsec 1
<br>package-string: glusterfs 10.3
<br>/lib/x86_64-linux-gnu/libglusterfs.so.0(+0x28a54)[0x7f74f286ba54]
<br>/lib/x86_64-linux-gnu/libglusterfs.so.0(gf_print_trace+0x700)[0x7f74f2873fc0]
<br>/lib/x86_64-linux-gnu/libc.so.6(+0x38d60)[0x7f74f262ed60]
<br>/usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/cluster/disperse.so(+0x37a14)[0x7f74ecfcea14]
<br>/usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/cluster/disperse.so(+0x19414)[0x7f74ecfb0414]
<br>/usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/cluster/disperse.so(+0x16373)[0x7f74ecfad373]
<br>/usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/cluster/disperse.so(+0x21d59)[0x7f74ecfb8d59]
<br>/usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/cluster/disperse.so(+0x22815)[0x7f74ecfb9815]
<br>/usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/cluster/disperse.so(+0x377d9)[0x7f74ecfce7d9]
<br>/usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/cluster/disperse.so(+0x19414)[0x7f74ecfb0414]
<br>/usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/cluster/disperse.so(+0x16373)[0x7f74ecfad373]
<br>/usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/cluster/disperse.so(+0x170f9)[0x7f74ecfae0f9]
<br>/usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/cluster/disperse.so(+0x313bb)[0x7f74ecfc83bb]
<br>/usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/protocol/client.so(+0x48e3a)[0x7f74ed06ce3a]
<br>/lib/x86_64-linux-gnu/libgfrpc.so.0(+0xfccb)[0x7f74f2816ccb]
<br>/lib/x86_64-linux-gnu/libgfrpc.so.0(rpc_transport_notify+0x26)[0x7f74f2812646]
<br>/usr/lib/x86_64-linux-gnu/glusterfs/10.3/rpc-transport/socket.so(+0x64c8)[0x7f74ee15f4c8]
<br>/usr/lib/x86_64-linux-gnu/glusterfs/10.3/rpc-transport/socket.so(+0xd38c)[0x7f74ee16638c]
<br>/lib/x86_64-linux-gnu/libglusterfs.so.0(+0x7971d)[0x7f74f28bc71d]
<br>/lib/x86_64-linux-gnu/libpthread.so.0(+0x7ea7)[0x7f74f27d2ea7]
<br>/lib/x86_64-linux-gnu/libc.so.6(clone+0x3f)[0x7f74f26f2aef]
<br>---------<br>
</span><font face="arial, sans-serif">The mount point wasn't accessible with the "Tr<span style="color:rgb(0,0,0)">ansport endpoint is not connected" message and it was shown like this.</span><br></font></div><div><span style="font-family:monospace"><span style="color:rgb(0,0,0)">d????????? ? ? ? ? ? </span><span style="font-weight:bold;color:rgb(84,84,255)">vmdata</span><br><span style="color:rgb(0,0,0)">
</span><br></span><font face="arial, sans-serif">I had to stop all the VMs on that proxmox node, then stop the gluster daemon to ummount de directory, and after starting the daemon and re-mounting, all was working again.</font></div><div><span style="font-family:monospace"><br></span></div><div><span style="font-family:monospace">My gluster volume info returns this</span></div><div><span style="font-family:monospace"> <br>Volume Name: vmdata
<br>Type: Distributed-Disperse
<br>Volume ID: cace5aa4-b13a-4750-8736-aa179c2485e1
<br>Status: Started
<br>Snapshot Count: 0
<br>Number of Bricks: 2 x (2 + 1) = 6
<br>Transport-type: tcp
<br>Bricks:
<br>Brick1: g01:/data/brick1/brick
<br>Brick2: g02:/data/brick2/brick
<br>Brick3: g03:/data/brick1/brick
<br>Brick4: g01:/data/brick2/brick
<br>Brick5: g02:/data/brick1/brick
<br>Brick6: g03:/data/brick2/brick
<br>Options Reconfigured:
<br>nfs.disable: on
<br>transport.address-family: inet
<br>storage.fips-mode-rchecksum: on
<br>features.shard: enable
<br>features.shard-block-size: 256MB
<br>performance.read-ahead: off
<br>performance.quick-read: off
<br>performance.io-cache: off
<br>server.event-threads: 2
<br>client.event-threads: 3
<br>performance.client-io-threads: on
<br>performance.stat-prefetch: off
<br>dht.force-readdirp: off
<br>performance.force-readdirp: off
<br>network.remote-dio: on
<br>features.cache-invalidation: on
<br>performance.parallel-readdir: on
<br>performance.readdir-ahead: on<br>
<br></span><font face="arial, sans-serif">Xavi, do you think the open-behind off setting can help somehow? I did try to understand what it does (with no luck), and if it could impact the performance of my VMs (I've the setup you know so well ;))</font><div>I would like to avoid more crashings like this, version 10.3 of gluster was working since two weeks ago, quite well until this morning.</div></div></div></div></blockquote><div><br></div><div>I don't think disabling open-behind will have any visible effect on performance. Open-behind is only useful for small files when the workload is mostly open + read + close, and quick-read is also enabled (which is not your case). The only effect it will have is that the latency "saved" during open is "paid" on the next operation sent to the file, so the total overall latency should be the same. Additionally, VM workload doesn't open files frequently, so it shouldn't matter much in any case.</div><div><br></div><div>That said, I'm not sure if the problem is the same in your case. Based on the stack of the crash, it seems an issue inside the disperse module.</div><div><br></div><div>What OS are you using ? are you using official packages ? if so, which ones ?</div><div><br></div><div>Is it possible to provide a backtrace from gdb ?</div><div><br></div><div>Regards,</div><div><br></div><div>Xavi</div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div><div><div><br></div><div><div><div dir="ltr"><div dir="ltr"><div style="color:rgb(34,34,34)"><font size="4" face="arial, sans-serif"><b>Angel Docampo</b></font></div><div style="color:rgb(34,34,34)"><a href="https://www.google.com/maps/place/Edificio+de+Oficinas+Euro+3/@41.3755943,2.0730134,17z/data=!3m2!4b1!5s0x12a4997021aad323:0x3e06bf8ae6d68351!4m5!3m4!1s0x12a4997a67bf592f:0x83c2323a9cc2aa4b!8m2!3d41.3755903!4d2.0752021" target="_blank"><img src="https://ci3.googleusercontent.com/mail-sig/AIorK4yfwAc1Ml7oXFmQS6cJWaMeVnZ7xmAkBZPyODZAB9R8us12sFWd19cHxqDJ7CRF-UcvfKFLJNg"></a> <a href="mailto:angel.docampo@eoniantec.com" target="_blank"><img src="https://ci3.googleusercontent.com/mail-sig/AIorK4xhLmETvCmyOlze-bvuD8EJDZ0KgPmtCKnW0ObWzrqFda6zykLG06WgSatNHY2tgyMj_FOg3RY"></a> <a href="tel:+34-93-1592929" target="_blank"><img src="https://ci3.googleusercontent.com/mail-sig/AIorK4wKRh91a3Q-nUQnp1zQ-4rrdeN4FKksw-kDAAzCOg9hOTqSiqNmU2AloNPHrS-QwtOWiFHYHl0"></a></div></div></div></div><br></div></div></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">El vie, 19 mar 2021 a las 2:10, David Cunningham (<<a href="mailto:dcunningham@voisonics.com" target="_blank">dcunningham@voisonics.com</a>>) escribió:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div>Hi Xavi,</div><div><br></div><div>Thank you for that information. 
<div><br></div><div>Regards,</div><div><br></div><div>Xavi</div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div><div><div><br></div><div><div><div dir="ltr"><div dir="ltr"><div style="color:rgb(34,34,34)"><font size="4" face="arial, sans-serif"><b>Angel Docampo</b></font></div><div style="color:rgb(34,34,34)"><a href="https://www.google.com/maps/place/Edificio+de+Oficinas+Euro+3/@41.3755943,2.0730134,17z/data=!3m2!4b1!5s0x12a4997021aad323:0x3e06bf8ae6d68351!4m5!3m4!1s0x12a4997a67bf592f:0x83c2323a9cc2aa4b!8m2!3d41.3755903!4d2.0752021" target="_blank"><img src="https://ci3.googleusercontent.com/mail-sig/AIorK4yfwAc1Ml7oXFmQS6cJWaMeVnZ7xmAkBZPyODZAB9R8us12sFWd19cHxqDJ7CRF-UcvfKFLJNg"></a> <a href="mailto:angel.docampo@eoniantec.com" target="_blank"><img src="https://ci3.googleusercontent.com/mail-sig/AIorK4xhLmETvCmyOlze-bvuD8EJDZ0KgPmtCKnW0ObWzrqFda6zykLG06WgSatNHY2tgyMj_FOg3RY"></a> <a href="tel:+34-93-1592929" target="_blank"><img src="https://ci3.googleusercontent.com/mail-sig/AIorK4wKRh91a3Q-nUQnp1zQ-4rrdeN4FKksw-kDAAzCOg9hOTqSiqNmU2AloNPHrS-QwtOWiFHYHl0"></a></div></div></div></div><br></div></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Fri, 19 Mar 2021 at 2:10, David Cunningham (<<a href="mailto:dcunningham@voisonics.com" target="_blank">dcunningham@voisonics.com</a>>) wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div>Hi Xavi,</div><div><br></div><div>Thank you for that information. We'll look at upgrading it.</div><div><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Fri, 12 Mar 2021 at 05:20, Xavi Hernandez <<a href="mailto:jahernan@redhat.com" target="_blank">jahernan@redhat.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div>Hi David,</div><div><br></div><div>With so little information it's hard to tell, but given that there are several OPEN and UNLINK operations, it could be related to an already fixed bug (in recent versions) in open-behind.</div><div><br></div><div>You can try disabling open-behind with this command:</div><div><br></div><div> <font face="monospace"># gluster volume set <volname> open-behind off</font></div><div><font face="monospace"><br></font></div><div><font face="arial, sans-serif">But given that the version you are using is very old and unmaintained, I would recommend upgrading to at least 8.x.</font></div><div><font face="arial, sans-serif"><br></font></div><div><font face="arial, sans-serif">Regards,</font></div><div><font face="arial, sans-serif"><br></font></div><div><font face="arial, sans-serif">Xavi</font></div><div><font face="arial, sans-serif"><br></font></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Wed, Mar 10, 2021 at 5:10 AM David Cunningham <<a href="mailto:dcunningham@voisonics.com" target="_blank">dcunningham@voisonics.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div>Hello,</div><div><br></div><div>We have a GlusterFS 5.13 server which also mounts itself with the native FUSE client. Recently the FUSE mount crashed and we found the following in the syslog. There isn't anything logged in mnt-glusterfs.log for that time. After killing all processes with a file handle open on the filesystem, we were able to unmount and then remount the filesystem successfully.<br></div><div><br></div><div>Would anyone have advice on how to debug this crash? 
Thank you in advance!<br></div><div><br></div><div>Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: pending frames:<br>Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: frame : type(0) op(0)<br>Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: frame : type(0) op(0)<br>Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: frame : type(1) op(UNLINK)<br>Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: frame : type(1) op(UNLINK)<br>Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: frame : type(1) op(OPEN)<br>Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: message repeated 3355 times: [ frame : type(1) op(OPEN)]<br>Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: frame : type(1) op(OPEN)<br>Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: message repeated 6965 times: [ frame : type(1) op(OPEN)]<br>Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: frame : type(1) op(OPEN)<br>Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: message repeated 4095 times: [ frame : type(1) op(OPEN)]<br>Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: frame : type(0) op(0)<br>Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: patchset: git://<a href="http://git.gluster.org/glusterfs.git" target="_blank">git.gluster.org/glusterfs.git</a><br>Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: signal received: 11<br>Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: time of crash:<br>Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: 2021-03-09 03:12:31<br>Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: configuration details:<br>Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: argp 1<br>Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: backtrace 1<br>Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: dlfcn 1<br>Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: libpthread 1<br>Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: llistxattr 1<br>Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: setfsid 1<br>Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: spinlock 1<br>Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: epoll.h 1<br>Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: xattr.h 1<br>Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: st_atim.tv_nsec 1<br>Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: package-string: glusterfs 5.13<br>Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: ---------<br>...<br>Mar 9 05:13:50 voip1 systemd[1]: glusterfssharedstorage.service: Main process exited, code=killed, status=11/SEGV<br>Mar 9 05:13:50 voip1 systemd[1]: glusterfssharedstorage.service: Failed with result 'signal'.<br>...<br>Mar 9 05:13:54 voip1 systemd[1]: glusterfssharedstorage.service: Service hold-off time over, scheduling restart.<br>Mar 9 05:13:54 voip1 systemd[1]: glusterfssharedstorage.service: Scheduled restart job, restart counter is at 2.<br>Mar 9 05:13:54 voip1 systemd[1]: Stopped Mount glusterfs sharedstorage.<br>Mar 9 05:13:54 voip1 systemd[1]: Starting Mount glusterfs sharedstorage...<br>Mar 9 05:13:54 voip1 mount-shared-storage.sh[20520]: ERROR: Mount point does not exist<br>Mar 9 05:13:54 voip1 mount-shared-storage.sh[20520]: Please specify a mount point<br>Mar 9 05:13:54 voip1 mount-shared-storage.sh[20520]: Usage:<br>Mar 9 05:13:54 voip1 mount-shared-storage.sh[20520]: man 8 /sbin/mount.glusterfs</div><div><br>-- <br><div dir="ltr"><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div>David Cunningham, Voisonics Limited<br><a href="http://voisonics.com/" target="_blank">http://voisonics.com/</a><br>USA: +1 213 221 1092<br>New Zealand: +64 (0)28 2558 3782</div></div></div></div></div></div></div></div></div></div></div></div></div>
</blockquote></div></div>
</blockquote></div><br clear="all"><br>-- <br><div dir="ltr"><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div>David Cunningham, Voisonics Limited<br><a href="http://voisonics.com/" target="_blank">http://voisonics.com/</a><br>USA: +1 213 221 1092<br>New Zealand: +64 (0)28 2558 3782</div></div></div></div></div></div></div></div></div></div></div>
</blockquote></div>
</blockquote></div></div>
</blockquote></div>
</blockquote></div>
</blockquote></div>
</blockquote></div>
</blockquote></div>