Just check an existing mount unit and use it as a reference. It's not very reliable to use a service to mount your mount points.

P.S.: Every entry in /etc/fstab gets a dynamically generated mount unit. If an fstab entry fails, the system fails to boot, yet a failing mount unit alone doesn't have that effect.

Best Regards,
Strahil Nikolov
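For illustration, a static mount unit along those lines might look roughly like this (a sketch only: the volume gluster1:VMS1 and mount point /vms1 are taken from the fstab example further down in this thread, names and options need adapting, and the file name must match the mount point, i.e. /etc/systemd/system/vms1.mount):

[Unit]
Description=GlusterFS mount for VM images
Wants=network-online.target
After=network-online.target
# add glusterd.service to After= if the client is also one of the gluster servers

[Mount]
What=gluster1:VMS1
Where=/vms1
Type=glusterfs
Options=defaults,_netdev,backupvolfile-server=gluster2

[Install]
WantedBy=remote-fs.target

Roughly the same unit is what systemd's fstab generator creates from an fstab line with _netdev; "systemctl cat vms1.mount" or "systemctl list-units --type=mount" shows the generated version.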
On Wednesday, June 7, 2023, 3:48 PM, Gilberto Ferreira <gilberto.nunes32@gmail.com> wrote:

Hi everybody,

Regarding the issue with the mounts: I usually use this systemd service to bring up the mount points:

/etc/systemd/system/glusterfsmounts.service

[Unit]
Description=Glustermounting
Requires=glusterd.service
Wants=glusterd.service
After=network.target network-online.target glusterd.service

[Service]
Type=simple
RemainAfterExit=true
ExecStartPre=/usr/sbin/gluster volume list
ExecStart=/bin/mount -a -t glusterfs
TimeoutSec=600
SuccessExitStatus=15
Restart=on-failure
RestartSec=60
StartLimitBurst=6
StartLimitInterval=3600

[Install]
WantedBy=multi-user.target
<br clear="none"></span><div><div dir="ltr" class="yiv3645914132gmail_signature"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div>After create it remember to reload the systemd daemon like:<br clear="none">systemctl enable glusterfsmounts.service</div><div>systemctl demon-reload</div><div><br clear="none"></div><div>Also, I am using /etc/fstab to mount the glusterfs mount point properly, since the Proxmox GUI seems to me a little broken in this regards<br clear="none"><span style="font-family:monospace;"><span style="color:rgb(0,0,0);">gluster1:VMS1 /vms1 glusterfs defaults,_netdev,x-systemd.automount,backupvolfile-server=gluster2 0 0</span><br clear="none"></span></div><div><br clear="none"></div><div>---</div><div><div><div>Gilberto Nunes Ferreira</div></div><div><span style="font-size:12.8px;">(47) 99676-7530 - Whatsapp / Telegram</span><br clear="none"></div><div><p style="font-size:12.8px;margin:0px;"></p><p style="font-size:12.8px;margin:0px;"><br clear="none"></p><p style="font-size:12.8px;margin:0px;"><br clear="none"></p></div></div><div><br clear="none"></div></div></div></div></div></div></div></div><br clear="none"></div></div></div><br clear="none"><div class="yiv3645914132gmail_quote"><div dir="ltr" class="yiv3645914132gmail_attr">Em qua., 7 de jun. de 2023 às 01:51, Strahil Nikolov <<a rel="nofollow noopener noreferrer" shape="rect" ymailto="mailto:hunter86_bg@yahoo.com" target="_blank" href="mailto:hunter86_bg@yahoo.com">hunter86_bg@yahoo.com</a>> escreveu:<br clear="none"></div><blockquote style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex;" class="yiv3645914132gmail_quote">Hi Chris,<div><br clear="none"></div><div>here is a link to the settings needed for VM storage: <a rel="nofollow noopener noreferrer" shape="rect" id="yiv3645914132m_-6160229704916755006linkextractor__1686113221340" target="_blank" href="https://github.com/gluster/glusterfs/blob/03592930239c3b43cbbdce17607c099ae075fd6d/extras/group-virt.example#L4">https://github.com/gluster/glusterfs/blob/03592930239c3b43cbbdce17607c099ae075fd6d/extras/group-virt.example#L4</a></div><div><br clear="none"></div><div>You can also ask in ovirt-users for real-world settings.Test well before changing production!!!</div><div><br clear="none"></div><div>IMPORTANT: ONCE SHARDING IS ENABLED, IT CANNOT BE DISABLED !!!</div><div><br clear="none"></div><div>Best Regards,</div><div>Strahil Nikolov </div><div> <br clear="none"> <blockquote style="margin:0px 0px 20px;"> <div style="font-family:Roboto, sans-serif;color:rgb(109,0,246);"> <div>On Mon, Jun 5, 2023 at 13:55, Christian Schoepplein</div><div><<a rel="nofollow noopener noreferrer" shape="rect" ymailto="mailto:christian.schoepplein@linova.de" target="_blank" href="mailto:christian.schoepplein@linova.de">christian.schoepplein@linova.de</a>> wrote:</div> </div> <div style="padding:10px 0px 0px 20px;margin:10px 0px 0px;border-left:1px solid rgb(109,0,246);"> Hi,<br clear="none"><br clear="none">we'd like to use glusterfs for Proxmox and virtual machines with qcow2 <br clear="none">disk images. We have a three node glusterfs setup with one volume and <br clear="none">Proxmox is attached and VMs are created, but after some time, and I think <br clear="none">after much i/o is going on for a VM, the data inside the virtual machine <br clear="none">gets corrupted. When I copy files from or to our glusterfs <br clear="none">directly everything is OK, I've checked the files with md5sum. 
On Mon, Jun 5, 2023 at 13:55, Christian Schoepplein <christian.schoepplein@linova.de> wrote:

Hi,

we'd like to use glusterfs for Proxmox and virtual machines with qcow2 disk images. We have a three node glusterfs setup with one volume; Proxmox is attached and VMs are created, but after some time, and I think after a lot of I/O inside a VM, the data inside the virtual machine gets corrupted. When I copy files from or to our glusterfs directly everything is OK, I've checked the files with md5sum. So in general our glusterfs setup seems to be OK, I think..., but with the VMs and the self-growing qcow2 images there are problems. If I use raw images for the VMs the tests look better, but I need to do more testing to be sure; the problem is a bit hard to reproduce :-(.

I've also asked on a Proxmox mailing list, but got no helpful response so far :-(. So maybe you have a hint what might be wrong with our setup and what needs to be configured to use glusterfs as a storage backend for virtual machines with self-growing disk images. Any helpful tip would be great, because I am absolutely no glusterfs expert and also not an expert for virtualization and what has to be done to let all components play well together... Thanks for your support!

Here is some info about our glusterfs setup, please let me know if you need more. We are using Ubuntu 22.04 as operating system:

root@gluster1:~# gluster --version
glusterfs 10.1
Repository revision: git://git.gluster.org/glusterfs.git
Copyright (c) 2006-2016 Red Hat, Inc. <https://www.gluster.org/>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser
General Public License, version 3 or any later version (LGPLv3
or later), or the GNU General Public License, version 2 (GPLv2),
in all cases as published by the Free Software Foundation.
root@gluster1:~#

root@gluster1:~# gluster v status gfs_vms

Status of volume: gfs_vms
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gluster1.linova.de:/glusterfs/sde1enc
/brick                                      58448     0          Y       1062218
Brick gluster2.linova.de:/glusterfs/sdc1enc
/brick                                      50254     0          Y       20596
Brick gluster3.linova.de:/glusterfs/sdc1enc
/brick                                      52840     0          Y       1627513
Brick gluster1.linova.de:/glusterfs/sdf1enc
/brick                                      49832     0          Y       1062227
Brick gluster2.linova.de:/glusterfs/sdd1enc
/brick                                      56095     0          Y       20612
Brick gluster3.linova.de:/glusterfs/sdd1enc
/brick                                      51252     0          Y       1627521
Brick gluster1.linova.de:/glusterfs/sdg1enc
/brick                                      54991     0          Y       1062230
Brick gluster2.linova.de:/glusterfs/sde1enc
/brick                                      60812     0          Y       20628
Brick gluster3.linova.de:/glusterfs/sde1enc
/brick                                      59254     0          Y       1627522
Self-heal Daemon on localhost               N/A       N/A        Y       1062249
Bitrot Daemon on localhost                  N/A       N/A        Y       3591335
Scrubber Daemon on localhost                N/A       N/A        Y       3591346
Self-heal Daemon on gluster2.linova.de      N/A       N/A        Y       20645
Bitrot Daemon on gluster2.linova.de         N/A       N/A        Y       987517
Scrubber Daemon on gluster2.linova.de       N/A       N/A        Y       987588
Self-heal Daemon on gluster3.linova.de      N/A       N/A        Y       1627568
Bitrot Daemon on gluster3.linova.de         N/A       N/A        Y       1627543
Scrubber Daemon on gluster3.linova.de       N/A       N/A        Y       1627554

Task Status of Volume gfs_vms
------------------------------------------------------------------------------
There are no active volume tasks

root@gluster1:~#

root@gluster1:~# gluster v status gfs_vms detail

Status of volume: gfs_vms
------------------------------------------------------------------------------
Brick                : Brick gluster1.linova.de:/glusterfs/sde1enc/brick
TCP Port             : 58448
RDMA Port            : 0
Online               : Y
Pid                  : 1062218
File System          : xfs
Device               : /dev/mapper/sde1enc
Mount Options        : rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota
Inode Size           : 512
Disk Space Free      : 3.6TB
Total Disk Space     : 3.6TB
Inode Count          : 390700096
Free Inodes          : 390699660
------------------------------------------------------------------------------
Brick                : Brick gluster2.linova.de:/glusterfs/sdc1enc/brick
TCP Port             : 50254
RDMA Port            : 0
Online               : Y
Pid                  : 20596
File System          : xfs
Device               : /dev/mapper/sdc1enc
Mount Options        : rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota
Inode Size           : 512
Disk Space Free      : 3.6TB
Total Disk Space     : 3.6TB
Inode Count          : 390700096
Free Inodes          : 390699660
------------------------------------------------------------------------------
Brick                : Brick gluster3.linova.de:/glusterfs/sdc1enc/brick
TCP Port             : 52840
RDMA Port            : 0
Online               : Y
Pid                  : 1627513
File System          : xfs
Device               : /dev/mapper/sdc1enc
Mount Options        : rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota
Inode Size           : 512
Disk Space Free      : 3.6TB
Total Disk Space     : 3.6TB
Inode Count          : 390700096
Free Inodes          : 390699673
------------------------------------------------------------------------------
Brick                : Brick gluster1.linova.de:/glusterfs/sdf1enc/brick
TCP Port             : 49832
RDMA Port            : 0
Online               : Y
Pid                  : 1062227
File System          : xfs
Device               : /dev/mapper/sdf1enc
Mount Options        : rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota
Inode Size           : 512
Disk Space Free      : 3.4TB
Total Disk Space     : 3.6TB
Inode Count          : 390700096
Free Inodes          : 390699632
------------------------------------------------------------------------------
Brick                : Brick gluster2.linova.de:/glusterfs/sdd1enc/brick
TCP Port             : 56095
RDMA Port            : 0
Online               : Y
Pid                  : 20612
File System          : xfs
Device               : /dev/mapper/sdd1enc
Mount Options        : rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota
Inode Size           : 512
Disk Space Free      : 3.4TB
Total Disk Space     : 3.6TB
Inode Count          : 390700096
Free Inodes          : 390699632
------------------------------------------------------------------------------
Brick                : Brick gluster3.linova.de:/glusterfs/sdd1enc/brick
TCP Port             : 51252
RDMA Port            : 0
Online               : Y
Pid                  : 1627521
File System          : xfs
Device               : /dev/mapper/sdd1enc
Mount Options        : rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota
Inode Size           : 512
Disk Space Free      : 3.4TB
Total Disk Space     : 3.6TB
Inode Count          : 390700096
Free Inodes          : 390699658
------------------------------------------------------------------------------
Brick                : Brick gluster1.linova.de:/glusterfs/sdg1enc/brick
TCP Port             : 54991
RDMA Port            : 0
Online               : Y
Pid                  : 1062230
File System          : xfs
Device               : /dev/mapper/sdg1enc
Mount Options        : rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota
Inode Size           : 512
Disk Space Free      : 3.5TB
Total Disk Space     : 3.6TB
Inode Count          : 390700096
Free Inodes          : 390699629
------------------------------------------------------------------------------
Brick                : Brick gluster2.linova.de:/glusterfs/sde1enc/brick
TCP Port             : 60812
RDMA Port            : 0
Online               : Y
Pid                  : 20628
File System          : xfs
Device               : /dev/mapper/sde1enc
Mount Options        : rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota
Inode Size           : 512
Disk Space Free      : 3.5TB
Total Disk Space     : 3.6TB
Inode Count          : 390700096
Free Inodes          : 390699629
------------------------------------------------------------------------------
Brick                : Brick gluster3.linova.de:/glusterfs/sde1enc/brick
TCP Port             : 59254
RDMA Port            : 0
Online               : Y
Pid                  : 1627522
File System          : xfs
Device               : /dev/mapper/sde1enc
Mount Options        : rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota
Inode Size           : 512
Disk Space Free      : 3.5TB
Total Disk Space     : 3.6TB
Inode Count          : 390700096
Free Inodes          : 390699652

root@gluster1:~#

root@gluster1:~# gluster v info gfs_vms

Volume Name: gfs_vms
Type: Distributed-Replicate
Volume ID: c70e9806-0463-44ea-818f-a6c824cc5a05
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x 3 = 9
Transport-type: tcp
Bricks:
Brick1: gluster1.linova.de:/glusterfs/sde1enc/brick
Brick2: gluster2.linova.de:/glusterfs/sdc1enc/brick
Brick3: gluster3.linova.de:/glusterfs/sdc1enc/brick
Brick4: gluster1.linova.de:/glusterfs/sdf1enc/brick
Brick5: gluster2.linova.de:/glusterfs/sdd1enc/brick
Brick6: gluster3.linova.de:/glusterfs/sdd1enc/brick
Brick7: gluster1.linova.de:/glusterfs/sdg1enc/brick
Brick8: gluster2.linova.de:/glusterfs/sde1enc/brick
Brick9: gluster3.linova.de:/glusterfs/sde1enc/brick
Options Reconfigured:
features.scrub: Active
features.bitrot: on
cluster.granular-entry-heal: on
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off

root@gluster1:~#

root@gluster1:~# gluster volume heal gms_vms
Launching heal operation to perform index self heal on volume gms_vms has been unsuccessful:
Volume gms_vms does not exist
root@gluster1:~# gluster volume heal gfs_vms
Launching heal operation to perform index self heal on volume gfs_vms has been successful
Use heal info commands to check status.
root@gluster1:~# gluster volume heal gfs_vms info
Brick gluster1.linova.de:/glusterfs/sde1enc/brick
Status: Connected
Number of entries: 0

Brick gluster2.linova.de:/glusterfs/sdc1enc/brick
Status: Connected
Number of entries: 0

Brick gluster3.linova.de:/glusterfs/sdc1enc/brick
Status: Connected
Number of entries: 0

Brick gluster1.linova.de:/glusterfs/sdf1enc/brick
Status: Connected
Number of entries: 0

Brick gluster2.linova.de:/glusterfs/sdd1enc/brick
Status: Connected
Number of entries: 0

Brick gluster3.linova.de:/glusterfs/sdd1enc/brick
Status: Connected
Number of entries: 0

Brick gluster1.linova.de:/glusterfs/sdg1enc/brick
Status: Connected
Number of entries: 0

Brick gluster2.linova.de:/glusterfs/sde1enc/brick
Status: Connected
Number of entries: 0

Brick gluster3.linova.de:/glusterfs/sde1enc/brick
Status: Connected
Number of entries: 0

root@gluster1:~#

These are the warnings and errors I've found in the logs on our three servers...
noreferrer" shape="rect" target="_blank" href="http://gluster1.linova.de">gluster1.linova.de</a>:<br clear="none"><br clear="none">glusterd.log:[2023-05-31 23:56:00.032233 +0000] W [glusterd-locks.c:545:glusterd_mgmt_v3_lock] (-->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf) [0x7f9b8d19eedf] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcdad2) [0x7f9b8d245ad2] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525) [0x7f9b8d244525] ) 0-management: Lock for gfs_vms held by a410159b-12db-4cf7-bad5-c5c817679d1b <br clear="none">glusterd.log:[2023-06-01 02:22:04.133256 +0000] W [glusterd-locks.c:545:glusterd_mgmt_v3_lock] (-->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf) [0x7f9b8d19eedf] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcdad2) [0x7f9b8d245ad2] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525) [0x7f9b8d244525] ) 0-management: Lock for gfs_vms held by a410159b-12db-4cf7-bad5-c5c817679d1b <br clear="none">glusterd.log:[2023-06-01 02:44:00.046086 +0000] W [glusterd-locks.c:545:glusterd_mgmt_v3_lock] (-->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf) [0x7f9b8d19eedf] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcdad2) [0x7f9b8d245ad2] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525) [0x7f9b8d244525] ) 0-management: Lock for gfs_vms held by a410159b-12db-4cf7-bad5-c5c817679d1b <br clear="none">glusterd.log:[2023-06-01 05:32:00.042698 +0000] W [glusterd-locks.c:545:glusterd_mgmt_v3_lock] (-->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf) [0x7f9b8d19eedf] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcdad2) [0x7f9b8d245ad2] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525) [0x7f9b8d244525] ) 0-management: Lock for gfs_vms held by a410159b-12db-4cf7-bad5-c5c817679d1b <br clear="none">glusterd.log:[2023-06-01 08:18:00.040890 +0000] W [glusterd-locks.c:545:glusterd_mgmt_v3_lock] (-->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf) [0x7f9b8d19eedf] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcdad2) [0x7f9b8d245ad2] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525) [0x7f9b8d244525] ) 0-management: Lock for gfs_vms held by a410159b-12db-4cf7-bad5-c5c817679d1b <br clear="none">glusterd.log:[2023-06-01 11:09:00.020843 +0000] W [glusterd-locks.c:545:glusterd_mgmt_v3_lock] (-->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf) [0x7f9b8d19eedf] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcdad2) [0x7f9b8d245ad2] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525) [0x7f9b8d244525] ) 0-management: Lock for gfs_vms held by a410159b-12db-4cf7-bad5-c5c817679d1b <br clear="none">glusterd.log:[2023-06-01 13:55:00.319414 +0000] W [glusterd-locks.c:545:glusterd_mgmt_v3_lock] (-->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf) [0x7f9b8d19eedf] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcdad2) [0x7f9b8d245ad2] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525) [0x7f9b8d244525] ) 0-management: Lock for gfs_vms held by a410159b-12db-4cf7-bad5-c5c817679d1b <br clear="none"><br clear="none">* Errors on <a rel="nofollow noopener noreferrer" shape="rect" 
target="_blank" href="http://gluster1.linova.de">gluster1.linova.de</a>:<br clear="none"><br clear="none">glusterd.log:[2023-05-31 23:56:00.032251 +0000] E [MSGID: 106118] [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable to acquire lock for gfs_vms <br clear="none">glusterd.log:[2023-06-01 02:22:04.133274 +0000] E [MSGID: 106118] [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable to acquire lock for gfs_vms <br clear="none">glusterd.log:[2023-06-01 02:44:00.046099 +0000] E [MSGID: 106118] [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable to acquire lock for gfs_vms <br clear="none">glusterd.log:[2023-06-01 05:32:00.042714 +0000] E [MSGID: 106118] [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable to acquire lock for gfs_vms <br clear="none">glusterd.log:[2023-06-01 08:18:00.040914 +0000] E [MSGID: 106118] [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable to acquire lock for gfs_vms <br clear="none">glusterd.log:[2023-06-01 11:09:00.020853 +0000] E [MSGID: 106118] [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable to acquire lock for gfs_vms <br clear="none">glusterd.log:[2023-06-01 13:21:57.752337 +0000] E [MSGID: 106525] [glusterd-op-sm.c:4248:glusterd_dict_set_volid] 0-management: Volume detail does not exist <br clear="none">glusterd.log:[2023-06-01 13:21:57.752363 +0000] E [MSGID: 106289] [glusterd-syncop.c:1947:gd_sync_task_begin] 0-management: Failed to build payload for operation 'Volume Status' <br clear="none">glusterd.log:[2023-06-01 13:55:00.319432 +0000] E [MSGID: 106118] [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable to acquire lock for gfs_vms <br clear="none"><br clear="none">* Warnings on <a rel="nofollow noopener noreferrer" shape="rect" target="_blank" href="http://gluster2.linova.de">gluster2.linova.de</a>:<br clear="none"><br clear="none">[2023-05-31 20:26:37.975658 +0000] W [glusterd-locks.c:545:glusterd_mgmt_v3_lock] (-->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf) [0x7f4ec1b5bedf] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcdad2) [0x7f4ec1c02ad2] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525) [0x7f4ec1c01525] ) 0-management: Lock for gfs_vms held by a410159b-12db-4cf7-bad5-c5c817679d1b <br clear="none"><br clear="none">* Errors on <a rel="nofollow noopener noreferrer" shape="rect" target="_blank" href="http://gluster2.linova.de">gluster2.linova.de</a>:<br clear="none"><br clear="none">[2023-05-31 20:26:37.975831 +0000] E [MSGID: 106118] [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable to acquire lock for gfs_vms <br clear="none"><br clear="none">* Warnings on <a rel="nofollow noopener noreferrer" shape="rect" target="_blank" href="http://gluster3.linova.de">gluster3.linova.de</a>:<br clear="none"><br clear="none">[2023-05-31 22:26:44.245188 +0000] W [glusterd-locks.c:545:glusterd_mgmt_v3_lock] (-->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf) [0x7f5f8ad5bedf] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcdad2) [0x7f5f8ae02ad2] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525) [0x7f5f8ae01525] ) 0-management: Lock for gfs_vms held by 4b0a8298-9284-4a24-8de0-f5c25aafb5c7 <br clear="none">[2023-05-31 22:58:20.000849 +0000] W [glusterd-locks.c:545:glusterd_mgmt_v3_lock] (-->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf) [0x7f5f8ad5bedf] 
-->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcdad2) [0x7f5f8ae02ad2] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525) [0x7f5f8ae01525] ) 0-management: Lock for gfs_vms held by 4b0a8298-9284-4a24-8de0-f5c25aafb5c7 <br clear="none">[2023-06-01 01:26:19.990639 +0000] W [glusterd-locks.c:545:glusterd_mgmt_v3_lock] (-->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf) [0x7f5f8ad5bedf] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcdad2) [0x7f5f8ae02ad2] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525) [0x7f5f8ae01525] ) 0-management: Lock for gfs_vms held by 4b0a8298-9284-4a24-8de0-f5c25aafb5c7 <br clear="none">[2023-06-01 07:09:44.252654 +0000] W [glusterd-locks.c:545:glusterd_mgmt_v3_lock] (-->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf) [0x7f5f8ad5bedf] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcdad2) [0x7f5f8ae02ad2] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525) [0x7f5f8ae01525] ) 0-management: Lock for gfs_vms held by 4b0a8298-9284-4a24-8de0-f5c25aafb5c7 <br clear="none">[2023-06-01 07:36:49.803972 +0000] W [glusterd-locks.c:545:glusterd_mgmt_v3_lock] (-->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf) [0x7f5f8ad5bedf] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcdad2) [0x7f5f8ae02ad2] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525) [0x7f5f8ae01525] ) 0-management: Lock for gfs_vms held by 4b0a8298-9284-4a24-8de0-f5c25aafb5c7 <br clear="none">[2023-06-01 07:42:20.003401 +0000] W [glusterd-locks.c:545:glusterd_mgmt_v3_lock] (-->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf) [0x7f5f8ad5bedf] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcdad2) [0x7f5f8ae02ad2] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525) [0x7f5f8ae01525] ) 0-management: Lock for gfs_vms held by 4b0a8298-9284-4a24-8de0-f5c25aafb5c7 <br clear="none">[2023-06-01 08:43:55.561333 +0000] W [glusterd-locks.c:545:glusterd_mgmt_v3_lock] (-->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf) [0x7f5f8ad5bedf] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcdad2) [0x7f5f8ae02ad2] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525) [0x7f5f8ae01525] ) 0-management: Lock for gfs_vms held by 7a63d6a0-feae-4349-b787-d0fc76b3db3a <br clear="none">[2023-06-01 13:07:04.152591 +0000] W [glusterd-locks.c:545:glusterd_mgmt_v3_lock] (-->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf) [0x7f5f8ad5bedf] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcdad2) [0x7f5f8ae02ad2] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525) [0x7f5f8ae01525] ) 0-management: Lock for gfs_vms held by a410159b-12db-4cf7-bad5-c5c817679d1b <br clear="none"><br clear="none">* Errors on <a rel="nofollow noopener noreferrer" shape="rect" target="_blank" href="http://gluster3.linova.de">gluster3.linova.de</a>:<br clear="none"><br clear="none">[2023-05-31 22:26:44.245214 +0000] E [MSGID: 106118] [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable to acquire lock for gfs_vms <br clear="none">[2023-05-31 22:58:20.000858 +0000] E [MSGID: 106118] [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable to acquire lock for 
gfs_vms <br clear="none">[2023-06-01 01:26:19.990648 +0000] E [MSGID: 106118] [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable to acquire lock for gfs_vms <br clear="none">[2023-06-01 07:09:44.252671 +0000] E [MSGID: 106118] [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable to acquire lock for gfs_vms <br clear="none">[2023-06-01 07:36:49.803986 +0000] E [MSGID: 106118] [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable to acquire lock for gfs_vms <br clear="none">[2023-06-01 07:42:20.003411 +0000] E [MSGID: 106118] [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable to acquire lock for gfs_vms <br clear="none">[2023-06-01 08:43:55.561349 +0000] E [MSGID: 106118] [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable to acquire lock for gfs_vms <br clear="none">[2023-06-01 13:07:04.152610 +0000] E [MSGID: 106118] [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable to acquire lock for gfs_vms <br clear="none"><br clear="none">Best regards and thanks again for any helpfull hint!<br clear="none"><br clear="none">  Chris<br clear="none">________<br clear="none"><br clear="none"><br clear="none"><br clear="none">Community Meeting Calendar:<br clear="none"><br clear="none">Schedule -<br clear="none">Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC<br clear="none">Bridge: <a rel="nofollow noopener noreferrer" shape="rect" target="_blank" href="https://meet.google.com/cpu-eiue-hvk">https://meet.google.com/cpu-eiue-hvk</a><br clear="none">Gluster-users mailing list<br clear="none"><a rel="nofollow noopener noreferrer" shape="rect" ymailto="mailto:Gluster-users@gluster.org" target="_blank" href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br clear="none"><a rel="nofollow noopener noreferrer" shape="rect" target="_blank" href="https://lists.gluster.org/mailman/listinfo/gluster-users">https://lists.gluster.org/mailman/listinfo/gluster-users</a><div id="yiv3645914132yqtfd09010" class="yiv3645914132yqt9576151613"><br clear="none"> </div></div><div id="yiv3645914132yqtfd88097" class="yiv3645914132yqt9576151613"> </div></blockquote></div><div id="yiv3645914132yqtfd53296" class="yiv3645914132yqt9576151613">________<br clear="none">
<br clear="none">
<br clear="none">
<br clear="none">
Community Meeting Calendar:<br clear="none">
<br clear="none">
Schedule -<br clear="none">
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC<br clear="none">
Bridge: <a rel="nofollow noopener noreferrer" shape="rect" target="_blank" href="https://meet.google.com/cpu-eiue-hvk">https://meet.google.com/cpu-eiue-hvk</a><br clear="none">
Gluster-users mailing list<br clear="none">
<a rel="nofollow noopener noreferrer" shape="rect" ymailto="mailto:Gluster-users@gluster.org" target="_blank" href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br clear="none">
<a rel="nofollow noopener noreferrer" shape="rect" target="_blank" href="https://lists.gluster.org/mailman/listinfo/gluster-users">https://lists.gluster.org/mailman/listinfo/gluster-users</a><br clear="none">