<div dir="ltr">Hi everybody<div><br></div><div>Regarding the issue with mount, usually I am using this systemd service to bring up the mount points:</div><div><span style="font-family:monospace"><span style="color:rgb(0,0,0)">/etc/systemd/system/glusterfsmounts.service</span><br></span><div><span style="font-family:monospace"><span style="color:rgb(0,0,0)">[Unit]
Description=Gluster mounting
Requires=glusterd.service
Wants=glusterd.service
After=network.target network-online.target glusterd.service

[Service]
Type=simple
RemainAfterExit=true
ExecStartPre=/usr/sbin/gluster volume list
ExecStart=/bin/mount -a -t glusterfs
TimeoutSec=600
SuccessExitStatus=15
Restart=on-failure
RestartSec=60
StartLimitBurst=6
StartLimitInterval=3600

[Install]
WantedBy=multi-user.target
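
One note on the unit above: since systemd v230 the start rate-limiting options are expected in the [Unit] section, and StartLimitInterval= was renamed. The legacy spellings under [Service] still work for backwards compatibility, but on current distributions the equivalent would be:

[Unit]
StartLimitIntervalSec=3600
StartLimitBurst=6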
<br></span><div><div dir="ltr" class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div>After create it remember to reload the systemd daemon like:<br>systemctl enable glusterfsmounts.service</div><div>systemctl demon-reload</div><div><br></div><div>Also, I am using /etc/fstab to mount the glusterfs mount point properly, since the Proxmox GUI seems to me a little broken in this regards<br><span style="font-family:monospace"><span style="color:rgb(0,0,0)">gluster1:VMS1 /vms1 glusterfs defaults,_netdev,x-systemd.automount,backupvolfile-server=gluster2 0 0</span><br></span></div><div><br></div><div>---</div><div><div><div>Gilberto Nunes Ferreira</div></div><div><span style="font-size:12.8px">(47) 99676-7530 - Whatsapp / Telegram</span><br></div><div><p style="font-size:12.8px;margin:0px"></p><p style="font-size:12.8px;margin:0px"><br></p><p style="font-size:12.8px;margin:0px"><br></p></div></div><div><br></div></div></div></div></div></div></div></div><br></div></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">Em qua., 7 de jun. de 2023 às 01:51, Strahil Nikolov <<a href="mailto:hunter86_bg@yahoo.com">hunter86_bg@yahoo.com</a>> escreveu:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Hi Chris,<div><br></div><div>here is a link to the settings needed for VM storage: <a id="m_-6160229704916755006linkextractor__1686113221340" href="https://github.com/gluster/glusterfs/blob/03592930239c3b43cbbdce17607c099ae075fd6d/extras/group-virt.example#L4" target="_blank">https://github.com/gluster/glusterfs/blob/03592930239c3b43cbbdce17607c099ae075fd6d/extras/group-virt.example#L4</a></div><div><br></div><div>You can also ask in ovirt-users for real-world settings.Test well before changing production!!!</div><div><br></div><div>IMPORTANT: ONCE SHARDING IS ENABLED, IT CANNOT BE DISABLED !!!</div><div><br></div><div>Best Regards,</div><div>Strahil Nikolov </div><div> <br> <blockquote style="margin:0px 0px 20px"> <div style="font-family:Roboto,sans-serif;color:rgb(109,0,246)"> <div>On Mon, Jun 5, 2023 at 13:55, Christian Schoepplein</div><div><<a href="mailto:christian.schoepplein@linova.de" target="_blank">christian.schoepplein@linova.de</a>> wrote:</div> </div> <div style="padding:10px 0px 0px 20px;margin:10px 0px 0px;border-left:1px solid rgb(109,0,246)"> Hi,<br><br>we'd like to use glusterfs for Proxmox and virtual machines with qcow2 <br>disk images. We have a three node glusterfs setup with one volume and <br>Proxmox is attached and VMs are created, but after some time, and I think <br>after much i/o is going on for a VM, the data inside the virtual machine <br>gets corrupted. When I copy files from or to our glusterfs <br>directly everything is OK, I've checked the files with md5sum. So in general <br>our glusterfs setup seems to be OK I think..., but with the VMs and the self <br>growing qcow2 images there are problems. If I use raw images for the VMs <br>tests look better, but I need to do more testing to be sure, the problem is <br>a bit hard to reproduce :-(.<br><br>I've also asked on a Proxmox mailinglist, but got no helpfull response so <br>far :-(. So maybe you have any helping hint what might be wrong with our <br>setup, what needs to be configured to use glusterfs as a storage backend for <br>virtual machines with self growing disk images. e.g. 

---
Gilberto Nunes Ferreira
(47) 99676-7530 - Whatsapp / Telegram


On Wed, Jun 7, 2023 at 01:51, Strahil Nikolov <hunter86_bg@yahoo.com> wrote:

Hi Chris,

here is a link to the settings needed for VM storage:
https://github.com/gluster/glusterfs/blob/03592930239c3b43cbbdce17607c099ae075fd6d/extras/group-virt.example#L4
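
Those options usually don't have to be set one by one: glusterd ships them as the predefined "virt" group, so something like the following should apply them in one step (a sketch only; gfs_vms is the volume name from Chris' mail below):

gluster volume set gfs_vms group virt   # applies the options from /var/lib/glusterd/groups/virt
gluster volume info gfs_vms             # review the reconfigured options afterwards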

You can also ask in ovirt-users for real-world settings. Test well before changing production!!!

IMPORTANT: ONCE SHARDING IS ENABLED, IT CANNOT BE DISABLED!!!
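
On a test volume, turning sharding on would look roughly like this (a sketch; features.shard-block-size defaults to 64MB, and per the warning above this is a one-way switch, so only try it on a volume you can throw away):

gluster volume set gfs_vms features.shard on
gluster volume set gfs_vms features.shard-block-size 64MB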

Best Regards,
Strahil Nikolov


On Mon, Jun 5, 2023 at 13:55, Christian Schoepplein <christian.schoepplein@linova.de> wrote:

Hi,

we'd like to use glusterfs for Proxmox and virtual machines with qcow2
disk images. We have a three-node glusterfs setup with one volume; Proxmox
is attached and VMs are created, but after some time, I think once a lot
of I/O has happened in a VM, the data inside the virtual machine gets
corrupted. When I copy files from or to our glusterfs directly, everything
is OK; I've checked the files with md5sum. So in general our glusterfs
setup seems to be OK, I think..., but with the VMs and the self-growing
qcow2 images there are problems. If I use raw images for the VMs the tests
look better, but I need to do more testing to be sure; the problem is a
bit hard to reproduce :-(.

I've also asked on a Proxmox mailing list, but got no helpful response so
far :-(. So maybe you have a hint about what might be wrong with our setup
and what needs to be configured to use glusterfs as a storage backend for
virtual machines with self-growing disk images. Any helpful tip would be
great, because I am absolutely no glusterfs expert and also not an expert
in virtualization or in what has to be done to let all components play
well together... Thanks for your support!

Here is some info about our glusterfs setup; please let me know if you
need more. We are using Ubuntu 22.04 as the operating system:

root@gluster1:~# gluster --version
glusterfs 10.1
Repository revision: git://git.gluster.org/glusterfs.git
Copyright (c) 2006-2016 Red Hat, Inc. <https://www.gluster.org/>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser
General Public License, version 3 or any later version (LGPLv3
or later), or the GNU General Public License, version 2 (GPLv2),
in all cases as published by the Free Software Foundation.
root@gluster1:~#

root@gluster1:~# gluster v status gfs_vms

Status of volume: gfs_vms
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gluster1.linova.de:/glusterfs/sde1enc
/brick                                      58448     0          Y       1062218
Brick gluster2.linova.de:/glusterfs/sdc1enc
/brick                                      50254     0          Y       20596
Brick gluster3.linova.de:/glusterfs/sdc1enc
/brick                                      52840     0          Y       1627513
Brick gluster1.linova.de:/glusterfs/sdf1enc
/brick                                      49832     0          Y       1062227
Brick gluster2.linova.de:/glusterfs/sdd1enc
/brick                                      56095     0          Y       20612
Brick gluster3.linova.de:/glusterfs/sdd1enc
/brick                                      51252     0          Y       1627521
Brick gluster1.linova.de:/glusterfs/sdg1enc
/brick                                      54991     0          Y       1062230
Brick gluster2.linova.de:/glusterfs/sde1enc
/brick                                      60812     0          Y       20628
Brick gluster3.linova.de:/glusterfs/sde1enc
/brick                                      59254     0          Y       1627522
Self-heal Daemon on localhost               N/A       N/A        Y       1062249
Bitrot Daemon on localhost                  N/A       N/A        Y       3591335
Scrubber Daemon on localhost                N/A       N/A        Y       3591346
Self-heal Daemon on gluster2.linova.de      N/A       N/A        Y       20645
Bitrot Daemon on gluster2.linova.de         N/A       N/A        Y       987517
Scrubber Daemon on gluster2.linova.de       N/A       N/A        Y       987588
Self-heal Daemon on gluster3.linova.de      N/A       N/A        Y       1627568
Bitrot Daemon on gluster3.linova.de         N/A       N/A        Y       1627543
Scrubber Daemon on gluster3.linova.de       N/A       N/A        Y       1627554

Task Status of Volume gfs_vms
------------------------------------------------------------------------------
There are no active volume tasks

root@gluster1:~#

root@gluster1:~# gluster v status gfs_vms detail

Status of volume: gfs_vms
------------------------------------------------------------------------------
Brick                : Brick gluster1.linova.de:/glusterfs/sde1enc/brick
TCP Port             : 58448
RDMA Port            : 0
Online               : Y
Pid                  : 1062218
File System          : xfs
Device               : /dev/mapper/sde1enc
Mount Options        : rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota
Inode Size           : 512
Disk Space Free      : 3.6TB
Total Disk Space     : 3.6TB
Inode Count          : 390700096
Free Inodes          : 390699660
------------------------------------------------------------------------------
Brick                : Brick gluster2.linova.de:/glusterfs/sdc1enc/brick
TCP Port             : 50254
RDMA Port            : 0
Online               : Y
Pid                  : 20596
File System          : xfs
Device               : /dev/mapper/sdc1enc
Mount Options        : rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota
Inode Size           : 512
Disk Space Free      : 3.6TB
Total Disk Space     : 3.6TB
Inode Count          : 390700096
Free Inodes          : 390699660
------------------------------------------------------------------------------
Brick                : Brick gluster3.linova.de:/glusterfs/sdc1enc/brick
TCP Port             : 52840
RDMA Port            : 0
Online               : Y
Pid                  : 1627513
File System          : xfs
Device               : /dev/mapper/sdc1enc
Mount Options        : rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota
Inode Size           : 512
Disk Space Free      : 3.6TB
Total Disk Space     : 3.6TB
Inode Count          : 390700096
Free Inodes          : 390699673
------------------------------------------------------------------------------
Brick                : Brick gluster1.linova.de:/glusterfs/sdf1enc/brick
TCP Port             : 49832
RDMA Port            : 0
Online               : Y
Pid                  : 1062227
File System          : xfs
Device               : /dev/mapper/sdf1enc
Mount Options        : rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota
Inode Size           : 512
Disk Space Free      : 3.4TB
Total Disk Space     : 3.6TB
Inode Count          : 390700096
Free Inodes          : 390699632
------------------------------------------------------------------------------
Brick                : Brick gluster2.linova.de:/glusterfs/sdd1enc/brick
TCP Port             : 56095
RDMA Port            : 0
Online               : Y
Pid                  : 20612
File System          : xfs
Device               : /dev/mapper/sdd1enc
Mount Options        : rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota
Inode Size           : 512
Disk Space Free      : 3.4TB
Total Disk Space     : 3.6TB
Inode Count          : 390700096
Free Inodes          : 390699632
------------------------------------------------------------------------------
Brick                : Brick gluster3.linova.de:/glusterfs/sdd1enc/brick
TCP Port             : 51252
RDMA Port            : 0
Online               : Y
Pid                  : 1627521
File System          : xfs
Device               : /dev/mapper/sdd1enc
Mount Options        : rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota
Inode Size           : 512
Disk Space Free      : 3.4TB
Total Disk Space     : 3.6TB
Inode Count          : 390700096
Free Inodes          : 390699658
------------------------------------------------------------------------------
Brick                : Brick gluster1.linova.de:/glusterfs/sdg1enc/brick
TCP Port             : 54991
RDMA Port            : 0
Online               : Y
Pid                  : 1062230
File System          : xfs
Device               : /dev/mapper/sdg1enc
Mount Options        : rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota
Inode Size           : 512
Disk Space Free      : 3.5TB
Total Disk Space     : 3.6TB
Inode Count          : 390700096
Free Inodes          : 390699629
------------------------------------------------------------------------------
Brick                : Brick gluster2.linova.de:/glusterfs/sde1enc/brick
TCP Port             : 60812
RDMA Port            : 0
Online               : Y
Pid                  : 20628
File System          : xfs
Device               : /dev/mapper/sde1enc
Mount Options        : rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota
Inode Size           : 512
Disk Space Free      : 3.5TB
Total Disk Space     : 3.6TB
Inode Count          : 390700096
Free Inodes          : 390699629
------------------------------------------------------------------------------
Brick                : Brick gluster3.linova.de:/glusterfs/sde1enc/brick
TCP Port             : 59254
RDMA Port            : 0
Online               : Y
Pid                  : 1627522
File System          : xfs
Device               : /dev/mapper/sde1enc
Mount Options        : rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota
Inode Size           : 512
Disk Space Free      : 3.5TB
Total Disk Space     : 3.6TB
Inode Count          : 390700096
Free Inodes          : 390699652

root@gluster1:~#

root@gluster1:~# gluster v info gfs_vms

Volume Name: gfs_vms
Type: Distributed-Replicate
Volume ID: c70e9806-0463-44ea-818f-a6c824cc5a05
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x 3 = 9
Transport-type: tcp
Bricks:
Brick1: gluster1.linova.de:/glusterfs/sde1enc/brick
Brick2: gluster2.linova.de:/glusterfs/sdc1enc/brick
Brick3: gluster3.linova.de:/glusterfs/sdc1enc/brick
Brick4: gluster1.linova.de:/glusterfs/sdf1enc/brick
Brick5: gluster2.linova.de:/glusterfs/sdd1enc/brick
Brick6: gluster3.linova.de:/glusterfs/sdd1enc/brick
Brick7: gluster1.linova.de:/glusterfs/sdg1enc/brick
Brick8: gluster2.linova.de:/glusterfs/sde1enc/brick
Brick9: gluster3.linova.de:/glusterfs/sde1enc/brick
Options Reconfigured:
features.scrub: Active
features.bitrot: on
cluster.granular-entry-heal: on
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off

root@gluster1:~#

root@gluster1:~# gluster volume heal gms_vms
Launching heal operation to perform index self heal on volume gms_vms has
been unsuccessful:
Volume gms_vms does not exist
root@gluster1:~# gluster volume heal gfs_vms
Launching heal operation to perform index self heal on volume gfs_vms has
been successful
Use heal info commands to check status.
root@gluster1:~# gluster volume heal gfs_vms info
Brick gluster1.linova.de:/glusterfs/sde1enc/brick
Status: Connected
Number of entries: 0

Brick gluster2.linova.de:/glusterfs/sdc1enc/brick
Status: Connected
Number of entries: 0

Brick gluster3.linova.de:/glusterfs/sdc1enc/brick
Status: Connected
Number of entries: 0

Brick gluster1.linova.de:/glusterfs/sdf1enc/brick
Status: Connected
Number of entries: 0

Brick gluster2.linova.de:/glusterfs/sdd1enc/brick
Status: Connected
Number of entries: 0

Brick gluster3.linova.de:/glusterfs/sdd1enc/brick
Status: Connected
Number of entries: 0

Brick gluster1.linova.de:/glusterfs/sdg1enc/brick
Status: Connected
Number of entries: 0

Brick gluster2.linova.de:/glusterfs/sde1enc/brick
Status: Connected
Number of entries: 0

Brick gluster3.linova.de:/glusterfs/sde1enc/brick
Status: Connected
Number of entries: 0

root@gluster1:~#

These are the warnings and errors I've found in the logs on our three
servers...

* Warnings on gluster1.linova.de:

glusterd.log:[2023-05-31 23:56:00.032233 +0000] W [glusterd-locks.c:545:glusterd_mgmt_v3_lock] (-->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf) [0x7f9b8d19eedf] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcdad2) [0x7f9b8d245ad2] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525) [0x7f9b8d244525] ) 0-management: Lock for gfs_vms held by a410159b-12db-4cf7-bad5-c5c817679d1b
glusterd.log:[2023-06-01 02:22:04.133256 +0000] W [glusterd-locks.c:545:glusterd_mgmt_v3_lock] (-->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf) [0x7f9b8d19eedf] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcdad2) [0x7f9b8d245ad2] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525) [0x7f9b8d244525] ) 0-management: Lock for gfs_vms held by a410159b-12db-4cf7-bad5-c5c817679d1b
glusterd.log:[2023-06-01 02:44:00.046086 +0000] W [glusterd-locks.c:545:glusterd_mgmt_v3_lock] (-->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf) [0x7f9b8d19eedf] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcdad2) [0x7f9b8d245ad2] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525) [0x7f9b8d244525] ) 0-management: Lock for gfs_vms held by a410159b-12db-4cf7-bad5-c5c817679d1b
glusterd.log:[2023-06-01 05:32:00.042698 +0000] W [glusterd-locks.c:545:glusterd_mgmt_v3_lock] (-->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf) [0x7f9b8d19eedf] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcdad2) [0x7f9b8d245ad2] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525) [0x7f9b8d244525] ) 0-management: Lock for gfs_vms held by a410159b-12db-4cf7-bad5-c5c817679d1b
glusterd.log:[2023-06-01 08:18:00.040890 +0000] W [glusterd-locks.c:545:glusterd_mgmt_v3_lock] (-->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf) [0x7f9b8d19eedf] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcdad2) [0x7f9b8d245ad2] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525) [0x7f9b8d244525] ) 0-management: Lock for gfs_vms held by a410159b-12db-4cf7-bad5-c5c817679d1b
glusterd.log:[2023-06-01 11:09:00.020843 +0000] W [glusterd-locks.c:545:glusterd_mgmt_v3_lock] (-->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf) [0x7f9b8d19eedf] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcdad2) [0x7f9b8d245ad2] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525) [0x7f9b8d244525] ) 0-management: Lock for gfs_vms held by a410159b-12db-4cf7-bad5-c5c817679d1b
glusterd.log:[2023-06-01 13:55:00.319414 +0000] W [glusterd-locks.c:545:glusterd_mgmt_v3_lock] (-->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf) [0x7f9b8d19eedf] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcdad2) [0x7f9b8d245ad2] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525) [0x7f9b8d244525] ) 0-management: Lock for gfs_vms held by a410159b-12db-4cf7-bad5-c5c817679d1b

* Errors on gluster1.linova.de:

glusterd.log:[2023-05-31 23:56:00.032251 +0000] E [MSGID: 106118] [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable to acquire lock for gfs_vms
glusterd.log:[2023-06-01 02:22:04.133274 +0000] E [MSGID: 106118] [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable to acquire lock for gfs_vms
glusterd.log:[2023-06-01 02:44:00.046099 +0000] E [MSGID: 106118] [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable to acquire lock for gfs_vms
glusterd.log:[2023-06-01 05:32:00.042714 +0000] E [MSGID: 106118] [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable to acquire lock for gfs_vms
glusterd.log:[2023-06-01 08:18:00.040914 +0000] E [MSGID: 106118] [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable to acquire lock for gfs_vms
glusterd.log:[2023-06-01 11:09:00.020853 +0000] E [MSGID: 106118] [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable to acquire lock for gfs_vms
glusterd.log:[2023-06-01 13:21:57.752337 +0000] E [MSGID: 106525] [glusterd-op-sm.c:4248:glusterd_dict_set_volid] 0-management: Volume detail does not exist
glusterd.log:[2023-06-01 13:21:57.752363 +0000] E [MSGID: 106289] [glusterd-syncop.c:1947:gd_sync_task_begin] 0-management: Failed to build payload for operation 'Volume Status'
glusterd.log:[2023-06-01 13:55:00.319432 +0000] E [MSGID: 106118] [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable to acquire lock for gfs_vms

* Warnings on gluster2.linova.de:

[2023-05-31 20:26:37.975658 +0000] W [glusterd-locks.c:545:glusterd_mgmt_v3_lock] (-->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf) [0x7f4ec1b5bedf] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcdad2) [0x7f4ec1c02ad2] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525) [0x7f4ec1c01525] ) 0-management: Lock for gfs_vms held by a410159b-12db-4cf7-bad5-c5c817679d1b

* Errors on gluster2.linova.de:

[2023-05-31 20:26:37.975831 +0000] E [MSGID: 106118] [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable to acquire lock for gfs_vms

* Warnings on gluster3.linova.de:

[2023-05-31 22:26:44.245188 +0000] W [glusterd-locks.c:545:glusterd_mgmt_v3_lock] (-->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf) [0x7f5f8ad5bedf] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcdad2) [0x7f5f8ae02ad2] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525) [0x7f5f8ae01525] ) 0-management: Lock for gfs_vms held by 4b0a8298-9284-4a24-8de0-f5c25aafb5c7
[2023-05-31 22:58:20.000849 +0000] W [glusterd-locks.c:545:glusterd_mgmt_v3_lock] (-->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf) [0x7f5f8ad5bedf] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcdad2) [0x7f5f8ae02ad2] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525) [0x7f5f8ae01525] ) 0-management: Lock for gfs_vms held by 4b0a8298-9284-4a24-8de0-f5c25aafb5c7
[2023-06-01 01:26:19.990639 +0000] W [glusterd-locks.c:545:glusterd_mgmt_v3_lock] (-->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf) [0x7f5f8ad5bedf] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcdad2) [0x7f5f8ae02ad2] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525) [0x7f5f8ae01525] ) 0-management: Lock for gfs_vms held by 4b0a8298-9284-4a24-8de0-f5c25aafb5c7
[2023-06-01 07:09:44.252654 +0000] W [glusterd-locks.c:545:glusterd_mgmt_v3_lock] (-->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf) [0x7f5f8ad5bedf] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcdad2) [0x7f5f8ae02ad2] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525) [0x7f5f8ae01525] ) 0-management: Lock for gfs_vms held by 4b0a8298-9284-4a24-8de0-f5c25aafb5c7
[2023-06-01 07:36:49.803972 +0000] W [glusterd-locks.c:545:glusterd_mgmt_v3_lock] (-->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf) [0x7f5f8ad5bedf] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcdad2) [0x7f5f8ae02ad2] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525) [0x7f5f8ae01525] ) 0-management: Lock for gfs_vms held by 4b0a8298-9284-4a24-8de0-f5c25aafb5c7
[2023-06-01 07:42:20.003401 +0000] W [glusterd-locks.c:545:glusterd_mgmt_v3_lock] (-->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf) [0x7f5f8ad5bedf] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcdad2) [0x7f5f8ae02ad2] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525) [0x7f5f8ae01525] ) 0-management: Lock for gfs_vms held by 4b0a8298-9284-4a24-8de0-f5c25aafb5c7
[2023-06-01 08:43:55.561333 +0000] W [glusterd-locks.c:545:glusterd_mgmt_v3_lock] (-->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf) [0x7f5f8ad5bedf] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcdad2) [0x7f5f8ae02ad2] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525) [0x7f5f8ae01525] ) 0-management: Lock for gfs_vms held by 7a63d6a0-feae-4349-b787-d0fc76b3db3a
[2023-06-01 13:07:04.152591 +0000] W [glusterd-locks.c:545:glusterd_mgmt_v3_lock] (-->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf) [0x7f5f8ad5bedf] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcdad2) [0x7f5f8ae02ad2] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525) [0x7f5f8ae01525] ) 0-management: Lock for gfs_vms held by a410159b-12db-4cf7-bad5-c5c817679d1b

* Errors on gluster3.linova.de:

[2023-05-31 22:26:44.245214 +0000] E [MSGID: 106118] [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable to acquire lock for gfs_vms
[2023-05-31 22:58:20.000858 +0000] E [MSGID: 106118] [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable to acquire lock for gfs_vms
[2023-06-01 01:26:19.990648 +0000] E [MSGID: 106118] [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable to acquire lock for gfs_vms
[2023-06-01 07:09:44.252671 +0000] E [MSGID: 106118] [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable to acquire lock for gfs_vms
[2023-06-01 07:36:49.803986 +0000] E [MSGID: 106118] [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable to acquire lock for gfs_vms
[2023-06-01 07:42:20.003411 +0000] E [MSGID: 106118] [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable to acquire lock for gfs_vms
[2023-06-01 08:43:55.561349 +0000] E [MSGID: 106118] [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable to acquire lock for gfs_vms
[2023-06-01 13:07:04.152610 +0000] E [MSGID: 106118] [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable to acquire lock for gfs_vms

Best regards and thanks again for any helpful hint!

  Chris
________

Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users