<div dir="ltr">Hi there.<br>I don't know if you are using 2node glusterfs solution, but here is my way in this scenario and it's work awesome for me:<div>(VMS1 is the gluster volume, as you can see)</div><div><br></div><div>gluster vol heal VMS1 enable<br>gluster vol set VMS1 network.ping-timeout 2<br>gluster vol set VMS1 performance.quick-read off<br>gluster vol set VMS1 performance.read-ahead off<br>gluster vol set VMS1 performance.io-cache off<br>gluster vol set VMS1 performance.low-prio-threads 32<br>gluster vol set VMS1 performance.write-behind off<br>gluster vol set VMS1 performance.flush-behind off<br>gluster vol set VMS1 network.remote-dio disable<br>gluster vol set VMS1 performance.strict-o-direct on<br>gluster vol set VMS1 cluster.quorum-type fixed<br>gluster vol set VMS1 cluster.server-quorum-type none<br>gluster vol set VMS1 cluster.locking-scheme granular<br>gluster vol set VMS1 cluster.shd-max-threads 8<br>gluster vol set VMS1 cluster.shd-wait-qlength 10000<br>gluster vol set VMS1 cluster.data-self-heal-algorithm full<br>gluster vol set VMS1 cluster.favorite-child-policy mtime<br>gluster vol set VMS1 cluster.quorum-count 1<br>gluster vol set VMS1 cluster.quorum-reads false<br>gluster vol set VMS1 cluster.self-heal-daemon enable<br>gluster vol set VMS1 cluster.heal-timeout 5<br>gluster vol heal VMS1 granular-entry-heal enable<br>gluster vol set VMS1 features.shard on<br>gluster vol set VMS1 user.cifs off<br>gluster vol set VMS1 cluster.choose-local off<br>gluster vol set VMS1 client.event-threads 4<br>gluster vol set VMS1 server.event-threads 4<br>gluster vol set VMS1 performance.client-io-threads on<br>gluster vol set VMS1 network.ping-timeout 20<br>gluster vol set VMS1 server.tcp-user-timeout 20<br>gluster vol set VMS1 server.keepalive-time 10<br>gluster vol set VMS1 server.keepalive-interval 2<br>gluster vol set VMS1 server.keepalive-count 5<br>gluster vol set VMS1 cluster.lookup-optimize off</div><div><br></div><div>I have had created the replica 2 like this:</div><div>gluster vol create VMS1 replica 2 gluster1:/mnt/pve/dataglusterfs/vms/ gluster2:/mnt/pve/dataglusterfs/vms/</div><div>And to avoid split-brain I have had enabled thoses options above.</div><div>That I have had created the a folder like:</div><div>mkdir /vms1</div><div>After that I have had edit /etc/fstab like </div><div>in the first node:</div><div>gluster1:VMS1 /vms1 glusterfs defaults,_netdev,x-systemd.automount,backupvolfile-server=gluster2 0 0</div><div>in the second node:</div><div>gluster2:VMS1 /vms1 glusterfs defaults,_netdev,x-systemd.automount,backupvolfile-server=gluster1 0 0</div><div>gluster1 and gluster2 it's a dedicated 10g nic and included in the /etc/hosts like </div><div>172.16.20.10 gluster1<br>172.16.20.20 gluster2<br></div><div><br></div><div>Than in both nodes I do</div><div>mount /vms1</div><div>Now everything is ok.</div><div>As I am using Proxmox VE here, I just create a storage entry in the Proxmox /etc/pve/storage.cfg file like:</div><div>dir: STG-VMS-1<br> path /vms1<br> content rootdir,images<br> preallocation metadata<br> prune-backups keep-all=1<br> shared 1<br></div><div><br></div><div>And I am ready to fly!</div><div><br></div><div>Hope this can help you in any way!</div><div><br></div><div>Cheers</div><div><br></div><div><br></div><div><br></div><div><br></div><div><div><div dir="ltr" class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div>---</div><div><div><div>Gilberto Nunes 
<div><br></div><div>As I am using Proxmox VE here, I just create a storage entry in the Proxmox /etc/pve/storage.cfg file like:</div><div>dir: STG-VMS-1<br> path /vms1<br> content rootdir,images<br> preallocation metadata<br> prune-backups keep-all=1<br> shared 1<br></div><div><br></div><div>And I am ready to fly!</div><div><br></div><div>Hope this helps you in some way!</div><div><br></div><div>Cheers</div><div><br></div><div><div dir="ltr" class="gmail_signature" data-smartmail="gmail_signature"><div>---</div><div>Gilberto Nunes Ferreira</div><div><span style="font-size:12.8px">(47) 99676-7530 - Whatsapp / Telegram</span><br></div></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Mon, Jun 5, 2023 at 12:20, Christian Schoepplein <<a href="mailto:christian.schoepplein@linova.de" target="_blank">christian.schoepplein@linova.de</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Hi Gilberto, hi all,<br>
<br>
thanks a lot for all your answers.<br>
<br>
As a first step I changed both settings mentioned below, and the first tests look good.<br>
<br>
Before changing the settings I was able to crash a newly installed VM every <br>
time after a fresh installation by producing a lot of I/O, e.g. when installing <br>
LibreOffice. This always resulted in corrupt files inside the VM, but <br>
checking the qcow2 file with the qemu-img tool showed no errors for the <br>
file.<br>
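(That was a plain consistency check, something along the lines of <br>
qemu-img check /path/to/vm-disk.qcow2 run against the image file.)<br>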
<br>
I'll do further testing and will run more VMs on the volume during the next <br>
days; let's see how things go and whether further tweaking of the volume is <br>
necessary.<br>
<br>
Cheers,<br>
<br>
Chris<br>
<br>
<br>
On Fri, Jun 02, 2023 at 09:05:28AM -0300, Gilberto Ferreira wrote:<br>
>Try turning off these options:<br>
>performance.write-behind<br>
>performance.flush-behind<br>
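>That is, something like:<br>
>gluster volume set gfs_vms performance.write-behind off<br>
>gluster volume set gfs_vms performance.flush-behind off<br>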
><br>
>---<br>
>Gilberto Nunes Ferreira<br>
>(47) 99676-7530 - Whatsapp / Telegram<br>
><br>
><br>
><br>
><br>
><br>
><br>
>On Fri, Jun 2, 2023 at 07:55, Guillaume Pavese <<br>
><a href="mailto:guillaume.pavese@interactiv-group.com" target="_blank">guillaume.pavese@interactiv-group.com</a>> wrote:<br>
><br>
> On oVirt / Red Hat Virtualization,<br>
> the following Gluster volume settings are recommended to be applied<br>
> (preferably at the creation of the volume).<br>
> These settings are important for data reliability (note that Replica 3 or<br>
> Replica 2+1 is expected).<br>
><br>
> performance.quick-read=off<br>
> performance.read-ahead=off<br>
> performance.io-cache=off<br>
> performance.low-prio-threads=32<br>
> network.remote-dio=enable<br>
> cluster.eager-lock=enable<br>
> cluster.quorum-type=auto<br>
> cluster.server-quorum-type=server<br>
> cluster.data-self-heal-algorithm=full<br>
> cluster.locking-scheme=granular<br>
> cluster.shd-max-threads=8<br>
> cluster.shd-wait-qlength=10000<br>
> features.shard=on<br>
> user.cifs=off<br>
> cluster.choose-local=off<br>
> client.event-threads=4<br>
> server.event-threads=4<br>
> performance.client-io-threads=on<br>
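> (FWIW, this is roughly the set of options shipped as the "virt" group profile<br>
> with the gluster packages, so where that profile file exists it can also be<br>
> applied in one go, e.g.: gluster volume set gfs_vms group virt )<br>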
><br>
><br>
><br>
><br>
> Guillaume Pavese<br>
> Ingénieur Système et Réseau<br>
> Interactiv-Group<br>
><br>
><br>
> On Fri, Jun 2, 2023 at 5:33 AM W Kern <<a href="mailto:wkmail@bneit.com" target="_blank">wkmail@bneit.com</a>> wrote:<br>
><br>
> We use qcow2 with libvirt-based KVM on many small clusters and have<br>
> found it to be extremely reliable, though maybe not the fastest; part<br>
> of that is that most of our storage is SATA SSDs in a software RAID1<br>
> config for each brick.<br>
><br>
> What problems are you running into?<br>
><br>
> You just mention 'problems'<br>
><br>
> -wk<br>
><br>
> On 6/1/23 8:42 AM, Christian Schoepplein wrote:<br>
> > Hi,<br>
> ><br>
> > we'd like to use glusterfs for Proxmox and virtual machines with qcow2<br>
> > disk images. We have a three-node glusterfs setup with one volume,<br>
> > Proxmox is attached and VMs are created, but after some time, and I think<br>
> > after much I/O is going on in a VM, the data inside the virtual machine<br>
> > gets corrupted. When I copy files from or to our glusterfs directly,<br>
> > everything is OK, I've checked the files with md5sum. So in general our<br>
> > glusterfs setup seems to be OK I think..., but with the VMs and the<br>
> > self-growing qcow2 images there are problems. If I use raw images for<br>
> > the VMs, tests look better, but I need to do more testing to be sure;<br>
> > the problem is a bit hard to reproduce :-(.<br>
> ><br>
> > I've also asked on a Proxmox mailing list, but got no helpful response so<br>
> > far :-(. So maybe you have a helpful hint about what might be wrong with<br>
> > our setup and what needs to be configured to use glusterfs as a storage<br>
> > backend for virtual machines with self-growing disk images. Any helpful<br>
> > tip would be great, because I am absolutely no glusterfs expert and also<br>
> > no expert in virtualization or in what has to be done to let all<br>
> > components play well together... Thanks for your support!<br>
> ><br>
> > Here is some info about our glusterfs setup; please let me know if you<br>
> > need more info. We are using Ubuntu 22.04 as the operating system:<br>
> ><br>
> > root@gluster1:~# gluster --version<br>
> > glusterfs 10.1<br>
> > Repository revision: git://<a href="http://git.gluster.org/glusterfs.git" rel="noreferrer" target="_blank">git.gluster.org/glusterfs.git</a><br>
> > Copyright (c) 2006-2016 Red Hat, Inc. <<a href="https://www.gluster.org/" rel="noreferrer" target="_blank">https://www.gluster.org/</a>><br>
> > GlusterFS comes with ABSOLUTELY NO WARRANTY.<br>
> > It is licensed to you under your choice of the GNU Lesser<br>
> > General Public License, version 3 or any later version (LGPLv3<br>
> > or later), or the GNU General Public License, version 2 (GPLv2),<br>
> > in all cases as published by the Free Software Foundation.<br>
> > root@gluster1:~#<br>
> ><br>
> > root@gluster1:~# gluster v status gfs_vms<br>
> ><br>
> > Status of volume: gfs_vms<br>
> > Gluster process TCP Port RDMA Port <br>
> Online Pid<br>
> ><br>
> ------------------------------------------------------------------------------<br>
> > Brick gluster1.linova.de:/glusterfs/sde1enc<br>
> > /brick 58448 0 Y <br>
> 1062218<br>
> > Brick gluster2.linova.de:/glusterfs/sdc1enc<br>
> > /brick 50254 0 Y <br>
> 20596<br>
> > Brick gluster3.linova.de:/glusterfs/sdc1enc<br>
> > /brick 52840 0 Y <br>
> 1627513<br>
> > Brick gluster1.linova.de:/glusterfs/sdf1enc<br>
> > /brick 49832 0 Y <br>
> 1062227<br>
> > Brick gluster2.linova.de:/glusterfs/sdd1enc<br>
> > /brick 56095 0 Y <br>
> 20612<br>
> > Brick gluster3.linova.de:/glusterfs/sdd1enc<br>
> > /brick 51252 0 Y <br>
> 1627521<br>
> > Brick gluster1.linova.de:/glusterfs/sdg1enc<br>
> > /brick 54991 0 Y <br>
> 1062230<br>
> > Brick gluster2.linova.de:/glusterfs/sde1enc<br>
> > /brick 60812 0 Y <br>
> 20628<br>
> > Brick gluster3.linova.de:/glusterfs/sde1enc<br>
> > /brick 59254 0 Y <br>
> 1627522<br>
> > Self-heal Daemon on localhost N/A N/A Y <br>
> 1062249<br>
> > Bitrot Daemon on localhost N/A N/A Y <br>
> 3591335<br>
> > Scrubber Daemon on localhost N/A N/A Y <br>
> 3591346<br>
> > Self-heal Daemon on <a href="http://gluster2.linova.de" rel="noreferrer" target="_blank">gluster2.linova.de</a> N/A N/A Y <br>
> 20645<br>
> > Bitrot Daemon on <a href="http://gluster2.linova.de" rel="noreferrer" target="_blank">gluster2.linova.de</a> N/A N/A Y <br>
> 987517<br>
> > Scrubber Daemon on <a href="http://gluster2.linova.de" rel="noreferrer" target="_blank">gluster2.linova.de</a> N/A N/A Y <br>
> 987588<br>
> > Self-heal Daemon on <a href="http://gluster3.linova.de" rel="noreferrer" target="_blank">gluster3.linova.de</a> N/A N/A Y <br>
> 1627568<br>
> > Bitrot Daemon on <a href="http://gluster3.linova.de" rel="noreferrer" target="_blank">gluster3.linova.de</a> N/A N/A Y <br>
> 1627543<br>
> > Scrubber Daemon on <a href="http://gluster3.linova.de" rel="noreferrer" target="_blank">gluster3.linova.de</a> N/A N/A Y <br>
> 1627554<br>
> > <br>
> > Task Status of Volume gfs_vms<br>
> ><br>
> ------------------------------------------------------------------------------<br>
> > There are no active volume tasks<br>
> > <br>
> > root@gluster1:~#<br>
> ><br>
> > root@gluster1:~# gluster v status gfs_vms detail<br>
> ><br>
> > Status of volume: gfs_vms<br>
> ><br>
> ------------------------------------------------------------------------------<br>
> > Brick : Brick gluster1.linova.de:/glusterfs/sde1enc/<br>
> brick<br>
> > TCP Port : 58448<br>
> > RDMA Port : 0<br>
> > Online : Y<br>
> > Pid : 1062218<br>
> > File System : xfs<br>
> > Device : /dev/mapper/sde1enc<br>
> > Mount Options : rw,relatime,attr2,inode64,logbufs=8,logbsize=<br>
> 32k,noquota<br>
> > Inode Size : 512<br>
> > Disk Space Free : 3.6TB<br>
> > Total Disk Space : 3.6TB<br>
> > Inode Count : 390700096<br>
> > Free Inodes : 390699660<br>
> ><br>
> ------------------------------------------------------------------------------<br>
> > Brick : Brick gluster2.linova.de:/glusterfs/sdc1enc/<br>
> brick<br>
> > TCP Port : 50254<br>
> > RDMA Port : 0<br>
> > Online : Y<br>
> > Pid : 20596<br>
> > File System : xfs<br>
> > Device : /dev/mapper/sdc1enc<br>
> > Mount Options : rw,relatime,attr2,inode64,logbufs=8,logbsize=<br>
> 32k,noquota<br>
> > Inode Size : 512<br>
> > Disk Space Free : 3.6TB<br>
> > Total Disk Space : 3.6TB<br>
> > Inode Count : 390700096<br>
> > Free Inodes : 390699660<br>
> ><br>
> ------------------------------------------------------------------------------<br>
> > Brick : Brick gluster3.linova.de:/glusterfs/sdc1enc/<br>
> brick<br>
> > TCP Port : 52840<br>
> > RDMA Port : 0<br>
> > Online : Y<br>
> > Pid : 1627513<br>
> > File System : xfs<br>
> > Device : /dev/mapper/sdc1enc<br>
> > Mount Options : rw,relatime,attr2,inode64,logbufs=8,logbsize=<br>
> 32k,noquota<br>
> > Inode Size : 512<br>
> > Disk Space Free : 3.6TB<br>
> > Total Disk Space : 3.6TB<br>
> > Inode Count : 390700096<br>
> > Free Inodes : 390699673<br>
> ><br>
> ------------------------------------------------------------------------------<br>
> > Brick : Brick gluster1.linova.de:/glusterfs/sdf1enc/<br>
> brick<br>
> > TCP Port : 49832<br>
> > RDMA Port : 0<br>
> > Online : Y<br>
> > Pid : 1062227<br>
> > File System : xfs<br>
> > Device : /dev/mapper/sdf1enc<br>
> > Mount Options : rw,relatime,attr2,inode64,logbufs=8,logbsize=<br>
> 32k,noquota<br>
> > Inode Size : 512<br>
> > Disk Space Free : 3.4TB<br>
> > Total Disk Space : 3.6TB<br>
> > Inode Count : 390700096<br>
> > Free Inodes : 390699632<br>
> ><br>
> ------------------------------------------------------------------------------<br>
> > Brick : Brick gluster2.linova.de:/glusterfs/sdd1enc/<br>
> brick<br>
> > TCP Port : 56095<br>
> > RDMA Port : 0<br>
> > Online : Y<br>
> > Pid : 20612<br>
> > File System : xfs<br>
> > Device : /dev/mapper/sdd1enc<br>
> > Mount Options : rw,relatime,attr2,inode64,logbufs=8,logbsize=<br>
> 32k,noquota<br>
> > Inode Size : 512<br>
> > Disk Space Free : 3.4TB<br>
> > Total Disk Space : 3.6TB<br>
> > Inode Count : 390700096<br>
> > Free Inodes : 390699632<br>
> ><br>
> ------------------------------------------------------------------------------<br>
> > Brick : Brick gluster3.linova.de:/glusterfs/sdd1enc/<br>
> brick<br>
> > TCP Port : 51252<br>
> > RDMA Port : 0<br>
> > Online : Y<br>
> > Pid : 1627521<br>
> > File System : xfs<br>
> > Device : /dev/mapper/sdd1enc<br>
> > Mount Options : rw,relatime,attr2,inode64,logbufs=8,logbsize=<br>
> 32k,noquota<br>
> > Inode Size : 512<br>
> > Disk Space Free : 3.4TB<br>
> > Total Disk Space : 3.6TB<br>
> > Inode Count : 390700096<br>
> > Free Inodes : 390699658<br>
> ><br>
> ------------------------------------------------------------------------------<br>
> > Brick : Brick gluster1.linova.de:/glusterfs/sdg1enc/<br>
> brick<br>
> > TCP Port : 54991<br>
> > RDMA Port : 0<br>
> > Online : Y<br>
> > Pid : 1062230<br>
> > File System : xfs<br>
> > Device : /dev/mapper/sdg1enc<br>
> > Mount Options : rw,relatime,attr2,inode64,logbufs=8,logbsize=<br>
> 32k,noquota<br>
> > Inode Size : 512<br>
> > Disk Space Free : 3.5TB<br>
> > Total Disk Space : 3.6TB<br>
> > Inode Count : 390700096<br>
> > Free Inodes : 390699629<br>
> ><br>
> ------------------------------------------------------------------------------<br>
> > Brick : Brick gluster2.linova.de:/glusterfs/sde1enc/<br>
> brick<br>
> > TCP Port : 60812<br>
> > RDMA Port : 0<br>
> > Online : Y<br>
> > Pid : 20628<br>
> > File System : xfs<br>
> > Device : /dev/mapper/sde1enc<br>
> > Mount Options : rw,relatime,attr2,inode64,logbufs=8,logbsize=<br>
> 32k,noquota<br>
> > Inode Size : 512<br>
> > Disk Space Free : 3.5TB<br>
> > Total Disk Space : 3.6TB<br>
> > Inode Count : 390700096<br>
> > Free Inodes : 390699629<br>
> ><br>
> ------------------------------------------------------------------------------<br>
> > Brick : Brick gluster3.linova.de:/glusterfs/sde1enc/<br>
> brick<br>
> > TCP Port : 59254<br>
> > RDMA Port : 0<br>
> > Online : Y<br>
> > Pid : 1627522<br>
> > File System : xfs<br>
> > Device : /dev/mapper/sde1enc<br>
> > Mount Options : rw,relatime,attr2,inode64,logbufs=8,logbsize=<br>
> 32k,noquota<br>
> > Inode Size : 512<br>
> > Disk Space Free : 3.5TB<br>
> > Total Disk Space : 3.6TB<br>
> > Inode Count : 390700096<br>
> > Free Inodes : 390699652<br>
> > <br>
> > root@gluster1:~#<br>
> ><br>
> > root@gluster1:~# gluster v info gfs_vms<br>
> ><br>
> > <br>
> > Volume Name: gfs_vms<br>
> > Type: Distributed-Replicate<br>
> > Volume ID: c70e9806-0463-44ea-818f-a6c824cc5a05<br>
> > Status: Started<br>
> > Snapshot Count: 0<br>
> > Number of Bricks: 3 x 3 = 9<br>
> > Transport-type: tcp<br>
> > Bricks:<br>
> > Brick1: gluster1.linova.de:/glusterfs/sde1enc/brick<br>
> > Brick2: gluster2.linova.de:/glusterfs/sdc1enc/brick<br>
> > Brick3: gluster3.linova.de:/glusterfs/sdc1enc/brick<br>
> > Brick4: gluster1.linova.de:/glusterfs/sdf1enc/brick<br>
> > Brick5: gluster2.linova.de:/glusterfs/sdd1enc/brick<br>
> > Brick6: gluster3.linova.de:/glusterfs/sdd1enc/brick<br>
> > Brick7: gluster1.linova.de:/glusterfs/sdg1enc/brick<br>
> > Brick8: gluster2.linova.de:/glusterfs/sde1enc/brick<br>
> > Brick9: gluster3.linova.de:/glusterfs/sde1enc/brick<br>
> > Options Reconfigured:<br>
> > features.scrub: Active<br>
> > features.bitrot: on<br>
> > cluster.granular-entry-heal: on<br>
> > storage.fips-mode-rchecksum: on<br>
> > transport.address-family: inet<br>
> > nfs.disable: on<br>
> > performance.client-io-threads: off<br>
> ><br>
> > root@gluster1:~#<br>
> ><br>
> > root@gluster1:~# gluster volume heal gms_vms<br>
> > Launching heal operation to perform index self heal on volume gms_vms<br>
> has<br>
> > been unsuccessful:<br>
> > Volume gms_vms does not exist<br>
> > root@gluster1:~# gluster volume heal gfs_vms<br>
> > Launching heal operation to perform index self heal on volume gfs_vms<br>
> has<br>
> > been successful<br>
> > Use heal info commands to check status.<br>
> > root@gluster1:~# gluster volume heal gfs_vms info<br>
> > Brick gluster1.linova.de:/glusterfs/sde1enc/brick<br>
> > Status: Connected<br>
> > Number of entries: 0<br>
> ><br>
> > Brick gluster2.linova.de:/glusterfs/sdc1enc/brick<br>
> > Status: Connected<br>
> > Number of entries: 0<br>
> ><br>
> > Brick gluster3.linova.de:/glusterfs/sdc1enc/brick<br>
> > Status: Connected<br>
> > Number of entries: 0<br>
> ><br>
> > Brick gluster1.linova.de:/glusterfs/sdf1enc/brick<br>
> > Status: Connected<br>
> > Number of entries: 0<br>
> ><br>
> > Brick gluster2.linova.de:/glusterfs/sdd1enc/brick<br>
> > Status: Connected<br>
> > Number of entries: 0<br>
> ><br>
> > Brick gluster3.linova.de:/glusterfs/sdd1enc/brick<br>
> > Status: Connected<br>
> > Number of entries: 0<br>
> ><br>
> > Brick gluster1.linova.de:/glusterfs/sdg1enc/brick<br>
> > Status: Connected<br>
> > Number of entries: 0<br>
> ><br>
> > Brick gluster2.linova.de:/glusterfs/sde1enc/brick<br>
> > Status: Connected<br>
> > Number of entries: 0<br>
> ><br>
> > Brick gluster3.linova.de:/glusterfs/sde1enc/brick<br>
> > Status: Connected<br>
> > Number of entries: 0<br>
> ><br>
> > root@gluster1:~#<br>
> ><br>
> > These are the warnings and errors I've found in the logs on our three<br>
> > servers...<br>
> ><br>
> > * Warnings on <a href="http://gluster1.linova.de" rel="noreferrer" target="_blank">gluster1.linova.de</a>:<br>
> ><br>
> > glusterd.log:[2023-05-31 23:56:00.032233 +0000] W<br>
> [glusterd-locks.c:545:glusterd_mgmt_v3_lock] (-->/usr/lib/<br>
> x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf)<br>
> [0x7f9b8d19eedf] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/<br>
> mgmt/glusterd.so(+0xcdad2) [0x7f9b8d245ad2] -->/usr/lib/<br>
> x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525)<br>
> [0x7f9b8d244525] ) 0-management: Lock for gfs_vms held by<br>
> a410159b-12db-4cf7-bad5-c5c817679d1b<br>
> > glusterd.log:[2023-06-01 02:22:04.133256 +0000] W<br>
> [glusterd-locks.c:545:glusterd_mgmt_v3_lock] (-->/usr/lib/<br>
> x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf)<br>
> [0x7f9b8d19eedf] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/<br>
> mgmt/glusterd.so(+0xcdad2) [0x7f9b8d245ad2] -->/usr/lib/<br>
> x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525)<br>
> [0x7f9b8d244525] ) 0-management: Lock for gfs_vms held by<br>
> a410159b-12db-4cf7-bad5-c5c817679d1b<br>
> > glusterd.log:[2023-06-01 02:44:00.046086 +0000] W<br>
> [glusterd-locks.c:545:glusterd_mgmt_v3_lock] (-->/usr/lib/<br>
> x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf)<br>
> [0x7f9b8d19eedf] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/<br>
> mgmt/glusterd.so(+0xcdad2) [0x7f9b8d245ad2] -->/usr/lib/<br>
> x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525)<br>
> [0x7f9b8d244525] ) 0-management: Lock for gfs_vms held by<br>
> a410159b-12db-4cf7-bad5-c5c817679d1b<br>
> > glusterd.log:[2023-06-01 05:32:00.042698 +0000] W<br>
> [glusterd-locks.c:545:glusterd_mgmt_v3_lock] (-->/usr/lib/<br>
> x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf)<br>
> [0x7f9b8d19eedf] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/<br>
> mgmt/glusterd.so(+0xcdad2) [0x7f9b8d245ad2] -->/usr/lib/<br>
> x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525)<br>
> [0x7f9b8d244525] ) 0-management: Lock for gfs_vms held by<br>
> a410159b-12db-4cf7-bad5-c5c817679d1b<br>
> > glusterd.log:[2023-06-01 08:18:00.040890 +0000] W<br>
> [glusterd-locks.c:545:glusterd_mgmt_v3_lock] (-->/usr/lib/<br>
> x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf)<br>
> [0x7f9b8d19eedf] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/<br>
> mgmt/glusterd.so(+0xcdad2) [0x7f9b8d245ad2] -->/usr/lib/<br>
> x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525)<br>
> [0x7f9b8d244525] ) 0-management: Lock for gfs_vms held by<br>
> a410159b-12db-4cf7-bad5-c5c817679d1b<br>
> > glusterd.log:[2023-06-01 11:09:00.020843 +0000] W<br>
> [glusterd-locks.c:545:glusterd_mgmt_v3_lock] (-->/usr/lib/<br>
> x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf)<br>
> [0x7f9b8d19eedf] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/<br>
> mgmt/glusterd.so(+0xcdad2) [0x7f9b8d245ad2] -->/usr/lib/<br>
> x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525)<br>
> [0x7f9b8d244525] ) 0-management: Lock for gfs_vms held by<br>
> a410159b-12db-4cf7-bad5-c5c817679d1b<br>
> > glusterd.log:[2023-06-01 13:55:00.319414 +0000] W<br>
> [glusterd-locks.c:545:glusterd_mgmt_v3_lock] (-->/usr/lib/<br>
> x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf)<br>
> [0x7f9b8d19eedf] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/<br>
> mgmt/glusterd.so(+0xcdad2) [0x7f9b8d245ad2] -->/usr/lib/<br>
> x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525)<br>
> [0x7f9b8d244525] ) 0-management: Lock for gfs_vms held by<br>
> a410159b-12db-4cf7-bad5-c5c817679d1b<br>
> ><br>
> > * Errors on <a href="http://gluster1.linova.de" rel="noreferrer" target="_blank">gluster1.linova.de</a>:<br>
> ><br>
> > glusterd.log:[2023-05-31 23:56:00.032251 +0000] E [MSGID: 106118]<br>
> [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable to<br>
> acquire lock for gfs_vms<br>
> > glusterd.log:[2023-06-01 02:22:04.133274 +0000] E [MSGID: 106118]<br>
> [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable to<br>
> acquire lock for gfs_vms<br>
> > glusterd.log:[2023-06-01 02:44:00.046099 +0000] E [MSGID: 106118]<br>
> [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable to<br>
> acquire lock for gfs_vms<br>
> > glusterd.log:[2023-06-01 05:32:00.042714 +0000] E [MSGID: 106118]<br>
> [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable to<br>
> acquire lock for gfs_vms<br>
> > glusterd.log:[2023-06-01 08:18:00.040914 +0000] E [MSGID: 106118]<br>
> [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable to<br>
> acquire lock for gfs_vms<br>
> > glusterd.log:[2023-06-01 11:09:00.020853 +0000] E [MSGID: 106118]<br>
> [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable to<br>
> acquire lock for gfs_vms<br>
> > glusterd.log:[2023-06-01 13:21:57.752337 +0000] E [MSGID: 106525]<br>
> [glusterd-op-sm.c:4248:glusterd_dict_set_volid] 0-management: Volume<br>
> detail does not exist<br>
> > glusterd.log:[2023-06-01 13:21:57.752363 +0000] E [MSGID: 106289]<br>
> [glusterd-syncop.c:1947:gd_sync_task_begin] 0-management: Failed to<br>
> build payload for operation 'Volume Status'<br>
> > glusterd.log:[2023-06-01 13:55:00.319432 +0000] E [MSGID: 106118]<br>
> [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable to<br>
> acquire lock for gfs_vms<br>
> ><br>
> > * Warnings on <a href="http://gluster2.linova.de" rel="noreferrer" target="_blank">gluster2.linova.de</a>:<br>
> ><br>
> > [2023-05-31 20:26:37.975658 +0000] W<br>
> [glusterd-locks.c:545:glusterd_mgmt_v3_lock] (-->/usr/lib/<br>
> x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf)<br>
> [0x7f4ec1b5bedf] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/<br>
> mgmt/glusterd.so(+0xcdad2) [0x7f4ec1c02ad2] -->/usr/lib/<br>
> x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525)<br>
> [0x7f4ec1c01525] ) 0-management: Lock for gfs_vms held by<br>
> a410159b-12db-4cf7-bad5-c5c817679d1b<br>
> ><br>
> > * Errors on <a href="http://gluster2.linova.de" rel="noreferrer" target="_blank">gluster2.linova.de</a>:<br>
> ><br>
> > [2023-05-31 20:26:37.975831 +0000] E [MSGID: 106118]<br>
> [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable to<br>
> acquire lock for gfs_vms<br>
> ><br>
> > * Warnings on <a href="http://gluster3.linova.de" rel="noreferrer" target="_blank">gluster3.linova.de</a>:<br>
> ><br>
> > [2023-05-31 22:26:44.245188 +0000] W<br>
> [glusterd-locks.c:545:glusterd_mgmt_v3_lock] (-->/usr/lib/<br>
> x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf)<br>
> [0x7f5f8ad5bedf] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/<br>
> mgmt/glusterd.so(+0xcdad2) [0x7f5f8ae02ad2] -->/usr/lib/<br>
> x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525)<br>
> [0x7f5f8ae01525] ) 0-management: Lock for gfs_vms held by<br>
> 4b0a8298-9284-4a24-8de0-f5c25aafb5c7<br>
> > [2023-05-31 22:58:20.000849 +0000] W<br>
> [glusterd-locks.c:545:glusterd_mgmt_v3_lock] (-->/usr/lib/<br>
> x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf)<br>
> [0x7f5f8ad5bedf] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/<br>
> mgmt/glusterd.so(+0xcdad2) [0x7f5f8ae02ad2] -->/usr/lib/<br>
> x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525)<br>
> [0x7f5f8ae01525] ) 0-management: Lock for gfs_vms held by<br>
> 4b0a8298-9284-4a24-8de0-f5c25aafb5c7<br>
> > [2023-06-01 01:26:19.990639 +0000] W<br>
> [glusterd-locks.c:545:glusterd_mgmt_v3_lock] (-->/usr/lib/<br>
> x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf)<br>
> [0x7f5f8ad5bedf] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/<br>
> mgmt/glusterd.so(+0xcdad2) [0x7f5f8ae02ad2] -->/usr/lib/<br>
> x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525)<br>
> [0x7f5f8ae01525] ) 0-management: Lock for gfs_vms held by<br>
> 4b0a8298-9284-4a24-8de0-f5c25aafb5c7<br>
> > [2023-06-01 07:09:44.252654 +0000] W<br>
> [glusterd-locks.c:545:glusterd_mgmt_v3_lock] (-->/usr/lib/<br>
> x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf)<br>
> [0x7f5f8ad5bedf] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/<br>
> mgmt/glusterd.so(+0xcdad2) [0x7f5f8ae02ad2] -->/usr/lib/<br>
> x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525)<br>
> [0x7f5f8ae01525] ) 0-management: Lock for gfs_vms held by<br>
> 4b0a8298-9284-4a24-8de0-f5c25aafb5c7<br>
> > [2023-06-01 07:36:49.803972 +0000] W<br>
> [glusterd-locks.c:545:glusterd_mgmt_v3_lock] (-->/usr/lib/<br>
> x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf)<br>
> [0x7f5f8ad5bedf] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/<br>
> mgmt/glusterd.so(+0xcdad2) [0x7f5f8ae02ad2] -->/usr/lib/<br>
> x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525)<br>
> [0x7f5f8ae01525] ) 0-management: Lock for gfs_vms held by<br>
> 4b0a8298-9284-4a24-8de0-f5c25aafb5c7<br>
> > [2023-06-01 07:42:20.003401 +0000] W<br>
> [glusterd-locks.c:545:glusterd_mgmt_v3_lock] (-->/usr/lib/<br>
> x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf)<br>
> [0x7f5f8ad5bedf] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/<br>
> mgmt/glusterd.so(+0xcdad2) [0x7f5f8ae02ad2] -->/usr/lib/<br>
> x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525)<br>
> [0x7f5f8ae01525] ) 0-management: Lock for gfs_vms held by<br>
> 4b0a8298-9284-4a24-8de0-f5c25aafb5c7<br>
> > [2023-06-01 08:43:55.561333 +0000] W<br>
> [glusterd-locks.c:545:glusterd_mgmt_v3_lock] (-->/usr/lib/<br>
> x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf)<br>
> [0x7f5f8ad5bedf] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/<br>
> mgmt/glusterd.so(+0xcdad2) [0x7f5f8ae02ad2] -->/usr/lib/<br>
> x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525)<br>
> [0x7f5f8ae01525] ) 0-management: Lock for gfs_vms held by<br>
> 7a63d6a0-feae-4349-b787-d0fc76b3db3a<br>
> > [2023-06-01 13:07:04.152591 +0000] W<br>
> [glusterd-locks.c:545:glusterd_mgmt_v3_lock] (-->/usr/lib/<br>
> x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0x26edf)<br>
> [0x7f5f8ad5bedf] -->/usr/lib/x86_64-linux-gnu/glusterfs/10.1/xlator/<br>
> mgmt/glusterd.so(+0xcdad2) [0x7f5f8ae02ad2] -->/usr/lib/<br>
> x86_64-linux-gnu/glusterfs/10.1/xlator/mgmt/glusterd.so(+0xcc525)<br>
> [0x7f5f8ae01525] ) 0-management: Lock for gfs_vms held by<br>
> a410159b-12db-4cf7-bad5-c5c817679d1b<br>
> ><br>
> > * Errors on <a href="http://gluster3.linova.de" rel="noreferrer" target="_blank">gluster3.linova.de</a>:<br>
> ><br>
> > [2023-05-31 22:26:44.245214 +0000] E [MSGID: 106118]<br>
> [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable to<br>
> acquire lock for gfs_vms<br>
> > [2023-05-31 22:58:20.000858 +0000] E [MSGID: 106118]<br>
> [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable to<br>
> acquire lock for gfs_vms<br>
> > [2023-06-01 01:26:19.990648 +0000] E [MSGID: 106118]<br>
> [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable to<br>
> acquire lock for gfs_vms<br>
> > [2023-06-01 07:09:44.252671 +0000] E [MSGID: 106118]<br>
> [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable to<br>
> acquire lock for gfs_vms<br>
> > [2023-06-01 07:36:49.803986 +0000] E [MSGID: 106118]<br>
> [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable to<br>
> acquire lock for gfs_vms<br>
> > [2023-06-01 07:42:20.003411 +0000] E [MSGID: 106118]<br>
> [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable to<br>
> acquire lock for gfs_vms<br>
> > [2023-06-01 08:43:55.561349 +0000] E [MSGID: 106118]<br>
> [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable to<br>
> acquire lock for gfs_vms<br>
> > [2023-06-01 13:07:04.152610 +0000] E [MSGID: 106118]<br>
> [glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Unable to<br>
> acquire lock for gfs_vms<br>
> ><br>
> > Best regards and thanks again for any helpful hint!<br>
> ><br>
> > Chris<br>
<br>
<br>
-- <br>
Christian Schöpplein<br>
------------------------------------------------------------<br>
IT and Operations<br>
<br>
Linova Software GmbH Phone: +49 (0)89 4524668-39<br>
Ungererstraße 129 Fax: +49 (0)89 4524668-99<br>
80805 München<br>
<a href="http://www.linova.de" rel="noreferrer" target="_blank">http://www.linova.de</a> Email: <a href="mailto:christian.schoepplein@linova.de" target="_blank">christian.schoepplein@linova.de</a><br>
------------------------------------------------------------<br>
Geschäftsführer:<br>
Dr. Andreas Löhr, Tobias Weishäupl<br>
Registergericht:<br>
Amtsgericht München, HRB 172890<br>
USt-IdNr.: DE259281353<br>
<br>
________<br>
<br>
<br>
<br>
Community Meeting Calendar:<br>
<br>
Schedule -<br>
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC<br>
Bridge: <a href="https://meet.google.com/cpu-eiue-hvk" rel="noreferrer" target="_blank">https://meet.google.com/cpu-eiue-hvk</a><br>
Gluster-users mailing list<br>
<a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a><br>
<a href="https://lists.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">https://lists.gluster.org/mailman/listinfo/gluster-users</a><br>
</blockquote></div>