[Gluster-users] fuse access seems ok but native qemu access is too slow

algol at sapo.pt algol at sapo.pt
Fri Mar 19 18:19:54 UTC 2021


Sorry, maybe I wasn't clear. When I say "gluster native access" I am
talking about libgfapi.

That was just the doc I followed to set it up. It's working, but it is
too slow (and the logging is very verbose) when starting.
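
For reference, this is the kind of invocation I mean by native access (a
minimal sketch using the volume, paths and hosts from my setup quoted
below; other qemu options omitted):

  # FUSE: qemu opens a regular file on the mounted volume
  qemu-system-x86_64 -drive file=/mnt/ssd-volume/libvirt/images/test_6.qcow2.img,format=qcow2,if=virtio ...

  # libgfapi: qemu talks to the bricks directly through the gluster block driver
  qemu-system-x86_64 -drive file=gluster://srv-31.lan.example.com:24007/ssd-volume/libvirt/images/test_7.qcow2.img,format=qcow2,if=virtio ...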

Thanks

RM

----- Message from Strahil Nikolov <hunter86_bg at yahoo.com> -----

  Date: Fri, 19 Mar 2021 17:09:41 +0000 (UTC)

  From: Strahil Nikolov <hunter86_bg at yahoo.com>

  Subject: Re: [Gluster-users] fuse access seems ok but native qemu access is too slow

  To: algol at sapo.pt, gluster-users at gluster.org

> Many oVirt users are using qemu with libgfapi for Virtualization needs.
>
> Have you checked this one :  
>
>   
> https://github.com/gluster/glusterfs-specs/blob/master/done/GlusterFS%203.5/libgfapi%20with%20qemu%20libvirt.md
>
> I've used only oVirt's way (which obscures the details), but the  
> article looks legit.
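>
> If I remember correctly, the gist of that article is roughly the
> following (a sketch from memory; adjust the volume name to your setup):
>
>   gluster volume set ssd-volume server.allow-insecure on
>   # add "option rpc-auth-allow-insecure on" to /etc/glusterfs/glusterd.vol
>   systemctl restart glusterd
>   # then point qemu/libvirt at gluster://<host>/ssd-volume/<image>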
>
> Best Regards,
>
> Strahil Nikolov
>
>> Hi,
>>
>> I have just configured a distributed-replicate 2x3 gluster volume.
>>
>> It seems to be working fine when I mount the volume with fuse, but
>> native gluster access with qemu-img is much slower and outputs errors.
>> Log files are filled with warnings and errors.
>>
>> Starting a VM with the created images is possible, but it takes much
>> longer when using gluster native access. Nevertheless, after starting
>> up, there seems to be no penalty in using the image through native
>> access, which is sometimes even faster.
>>
>> Can anyone please help me figure out whether there is a problem and,
>> if so, where it is?
>>
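>> (If it helps to narrow this down, per-brick latency can be collected
>> with the profiler while reproducing the slow operations, e.g.:
>>
>>   gluster volume profile ssd-volume start
>>   # reproduce the slow qemu-img create / virsh start
>>   gluster volume profile ssd-volume info
>>   gluster volume profile ssd-volume stop
>> )
>>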
>> Thanks
>>
>> rm
>>
>> It seems to be working fine when I mount it with fuse:
>>
>> root at srv-31:~# mount -t glusterfs srv-32.lan.example.com:/ssd-volume /mnt/ssd-volume/
>>
>> root at srv-31:~# echo 'This is a test' > /mnt/ssd-volume/test/text.txt
>>
>> root at srv-31:~# ll /data/glusterfs/ssd/brick[12]/brick/test/*
>>
>> -rw-r--r-- 2 root root 15 mar 17 14:08  
>> /data/glusterfs/ssd/brick1/brick/test/text.txt
>>
>> root at srv-31:~# time qemu-img create -f qcow2  
>> /mnt/ssd-volume/libvirt/images/test_5.qcow2.img 64G
>> Formatting '/mnt/ssd-volume/libvirt/images/test_5.qcow2.img',  
>> fmt=qcow2 size=68719476736 cluster_size=65536 lazy_refcounts=off  
>> refcount_bits=16
>>
>> real    0m0,204s
>>
>> user    0m0,009s
>>
>> sys     0m0,009s
>>
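>> (As a sanity check, the image created through the FUSE mount can be
>> inspected in place, e.g.:
>>
>>   qemu-img info /mnt/ssd-volume/libvirt/images/test_5.qcow2.img
>> )
>>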
>> Creating an image file on the mounted gluster volume is quick, but I
>> get some warnings in the log files:
>>
>> root at srv-31:~# tail -f /var/log/glusterfs/glusterd.log   
>> /var/log/glusterfs/glustershd.log   
>> /var/log/glusterfs/bricks/data-glusterfs-ssd-brick[12]-brick.log
>>
>> ==> /var/log/glusterfs/bricks/data-glusterfs-ssd-brick2-brick.log <==
>>
>> [2021-03-19 11:37:11.301388 +0000] W  
>> [dict.c:1532:dict_get_with_ref]  
>> (-->/usr/lib/x86_64-linux-gnu/glusterfs/9.0/xlator/features/locks.so(+0x26c3e) [0x7fb1dbdccc3e] -->/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_get_uint32+0x37) [0x7fb1e1a1f3f7] -->/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_get_with_ref+0x7d) [0x7fb1e1a1e86d] ) 0-dict: dict OR key (glusterfs.lk.lkmode) is NULL [Invalid  
>> argument]
>>
>> [2021-03-19 11:37:11.303694 +0000] W  
>> [dict.c:1532:dict_get_with_ref]  
>> (-->/usr/lib/x86_64-linux-gnu/glusterfs/9.0/xlator/features/locks.so(+0x26c3e) [0x7fb1dbdccc3e] -->/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_get_uint32+0x37) [0x7fb1e1a1f3f7] -->/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_get_with_ref+0x7d) [0x7fb1e1a1e86d] ) 0-dict: dict OR key (glusterfs.lk.lkmode) is NULL [Invalid  
>> argument]
>>
>> [2021-03-19 11:37:11.305808 +0000] W  
>> [dict.c:1532:dict_get_with_ref]  
>> (-->/usr/lib/x86_64-linux-gnu/glusterfs/9.0/xlator/features/locks.so(+0x26c3e) [0x7fb1dbdccc3e] -->/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_get_uint32+0x37) [0x7fb1e1a1f3f7] -->/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_get_with_ref+0x7d) [0x7fb1e1a1e86d] ) 0-dict: dict OR key (glusterfs.lk.lkmode) is NULL [Invalid  
>> argument]
>>
>> [2021-03-19 11:37:11.307868 +0000] W  
>> [dict.c:1532:dict_get_with_ref]  
>> (-->/usr/lib/x86_64-linux-gnu/glusterfs/9.0/xlator/features/locks.so(+0x26c3e) [0x7fb1dbdccc3e] -->/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_get_uint32+0x37) [0x7fb1e1a1f3f7] -->/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_get_with_ref+0x7d) [0x7fb1e1a1e86d] ) 0-dict: dict OR key (glusterfs.lk.lkmode) is NULL [Invalid  
>> argument]
>>
>> [2021-03-19 11:37:11.309863 +0000] W  
>> [dict.c:1532:dict_get_with_ref]  
>> (-->/usr/lib/x86_64-linux-gnu/glusterfs/9.0/xlator/features/locks.so(+0x26c3e) [0x7fb1dbdccc3e] -->/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_get_uint32+0x37) [0x7fb1e1a1f3f7] -->/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_get_with_ref+0x7d) [0x7fb1e1a1e86d] ) 0-dict: dict OR key (glusterfs.lk.lkmode) is NULL [Invalid  
>> argument]
>>
>> [2021-03-19 11:37:11.312139 +0000] W  
>> [dict.c:1532:dict_get_with_ref]  
>> (-->/usr/lib/x86_64-linux-gnu/glusterfs/9.0/xlator/features/locks.so(+0x26c3e) [0x7fb1dbdccc3e] -->/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_get_uint32+0x37) [0x7fb1e1a1f3f7] -->/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_get_with_ref+0x7d) [0x7fb1e1a1e86d] ) 0-dict: dict OR key (glusterfs.lk.lkmode) is NULL [Invalid  
>> argument]
>>
>> [2021-03-19 11:37:11.317780 +0000] W  
>> [dict.c:1532:dict_get_with_ref]  
>> (-->/usr/lib/x86_64-linux-gnu/glusterfs/9.0/xlator/features/locks.so(+0x26c3e) [0x7fb1dbdccc3e] -->/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_get_uint32+0x37) [0x7fb1e1a1f3f7] -->/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_get_with_ref+0x7d) [0x7fb1e1a1e86d] ) 0-dict: dict OR key (glusterfs.lk.lkmode) is NULL [Invalid  
>> argument]
>>
>> [2021-03-19 11:37:11.319007 +0000] W  
>> [dict.c:1532:dict_get_with_ref]  
>> (-->/usr/lib/x86_64-linux-gnu/glusterfs/9.0/xlator/features/locks.so(+0x26c3e) [0x7fb1dbdccc3e] -->/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_get_uint32+0x37) [0x7fb1e1a1f3f7] -->/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_get_with_ref+0x7d) [0x7fb1e1a1e86d] ) 0-dict: dict OR key (glusterfs.lk.lkmode) is NULL [Invalid  
>> argument]
>>
>> [2021-03-19 11:37:11.321096 +0000] W  
>> [dict.c:1532:dict_get_with_ref]  
>> (-->/usr/lib/x86_64-linux-gnu/glusterfs/9.0/xlator/features/locks.so(+0x26c3e) [0x7fb1dbdccc3e] -->/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_get_uint32+0x37) [0x7fb1e1a1f3f7] -->/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_get_with_ref+0x7d) [0x7fb1e1a1e86d] ) 0-dict: dict OR key (glusterfs.lk.lkmode) is NULL [Invalid  
>> argument]
>>
>> [2021-03-19 11:37:11.323111 +0000] W  
>> [dict.c:1532:dict_get_with_ref]  
>> (-->/usr/lib/x86_64-linux-gnu/glusterfs/9.0/xlator/features/locks.so(+0x26c3e) [0x7fb1dbdccc3e] -->/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_get_uint32+0x37) [0x7fb1e1a1f3f7] -->/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_get_with_ref+0x7d) [0x7fb1e1a1e86d] ) 0-dict: dict OR key (glusterfs.lk.lkmode) is NULL [Invalid  
>> argument]
>>
>> [2021-03-19 11:37:11.325284 +0000] W  
>> [dict.c:1532:dict_get_with_ref]  
>> (-->/usr/lib/x86_64-linux-gnu/glusterfs/9.0/xlator/features/locks.so(+0x26c3e) [0x7fb1dbdccc3e] -->/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_get_uint32+0x37) [0x7fb1e1a1f3f7] -->/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_get_with_ref+0x7d) [0x7fb1e1a1e86d] ) 0-dict: dict OR key (glusterfs.lk.lkmode) is NULL [Invalid  
>> argument]
>>
>> [2021-03-19 11:37:11.327319 +0000] W  
>> [dict.c:1532:dict_get_with_ref]  
>> (-->/usr/lib/x86_64-linux-gnu/glusterfs/9.0/xlator/features/locks.so(+0x26c3e) [0x7fb1dbdccc3e] -->/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_get_uint32+0x37) [0x7fb1e1a1f3f7] -->/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_get_with_ref+0x7d) [0x7fb1e1a1e86d] ) 0-dict: dict OR key (glusterfs.lk.lkmode) is NULL [Invalid  
>> argument]
>>
>> [2021-03-19 11:37:11.329488 +0000] W  
>> [dict.c:1532:dict_get_with_ref]  
>> (-->/usr/lib/x86_64-linux-gnu/glusterfs/9.0/xlator/features/locks.so(+0x26c3e) [0x7fb1dbdccc3e] -->/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_get_uint32+0x37) [0x7fb1e1a1f3f7] -->/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_get_with_ref+0x7d) [0x7fb1e1a1e86d] ) 0-dict: dict OR key (glusterfs.lk.lkmode) is NULL [Invalid  
>> argument]
>>
>> [2021-03-19 11:37:11.331792 +0000] W  
>> [dict.c:1532:dict_get_with_ref]  
>> (-->/usr/lib/x86_64-linux-gnu/glusterfs/9.0/xlator/features/locks.so(+0x26c3e) [0x7fb1dbdccc3e] -->/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_get_uint32+0x37) [0x7fb1e1a1f3f7] -->/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_get_with_ref+0x7d) [0x7fb1e1a1e86d] ) 0-dict: dict OR key (glusterfs.lk.lkmode) is NULL [Invalid  
>> argument]
>>
>> [2021-03-19 11:37:11.333597 +0000] W  
>> [dict.c:1532:dict_get_with_ref]  
>> (-->/usr/lib/x86_64-linux-gnu/glusterfs/9.0/xlator/features/locks.so(+0x26c3e) [0x7fb1dbdccc3e] -->/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_get_uint32+0x37) [0x7fb1e1a1f3f7] -->/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_get_with_ref+0x7d) [0x7fb1e1a1e86d] ) 0-dict: dict OR key (glusterfs.lk.lkmode) is NULL [Invalid  
>> argument]
>>
>> [2021-03-19 11:37:11.335864 +0000] W  
>> [dict.c:1532:dict_get_with_ref]  
>> (-->/usr/lib/x86_64-linux-gnu/glusterfs/9.0/xlator/features/locks.so(+0x26c3e) [0x7fb1dbdccc3e] -->/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_get_uint32+0x37) [0x7fb1e1a1f3f7] -->/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_get_with_ref+0x7d) [0x7fb1e1a1e86d] ) 0-dict: dict OR key (glusterfs.lk.lkmode) is NULL [Invalid  
>> argument]
>>
>> [2021-03-19 11:37:11.339551 +0000] W  
>> [dict.c:1532:dict_get_with_ref]  
>> (-->/usr/lib/x86_64-linux-gnu/glusterfs/9.0/xlator/features/locks.so(+0x26c3e) [0x7fb1dbdccc3e] -->/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_get_uint32+0x37) [0x7fb1e1a1f3f7] -->/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_get_with_ref+0x7d) [0x7fb1e1a1e86d] ) 0-dict: dict OR key (glusterfs.lk.lkmode) is NULL [Invalid  
>> argument]
>>
>> [2021-03-19 11:37:11.341110 +0000] W  
>> [dict.c:1532:dict_get_with_ref]  
>> (-->/usr/lib/x86_64-linux-gnu/glusterfs/9.0/xlator/features/locks.so(+0x26c3e) [0x7fb1dbdccc3e] -->/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_get_uint32+0x37) [0x7fb1e1a1f3f7] -->/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_get_with_ref+0x7d) [0x7fb1e1a1e86d] ) 0-dict: dict OR key (glusterfs.lk.lkmode) is NULL [Invalid  
>> argument]
>>
>> [2021-03-19 11:37:11.343079 +0000] W  
>> [dict.c:1532:dict_get_with_ref]  
>> (-->/usr/lib/x86_64-linux-gnu/glusterfs/9.0/xlator/features/locks.so(+0x26c3e) [0x7fb1dbdccc3e] -->/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_get_uint32+0x37) [0x7fb1e1a1f3f7] -->/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_get_with_ref+0x7d) [0x7fb1e1a1e86d] ) 0-dict: dict OR key (glusterfs.lk.lkmode) is NULL [Invalid  
>> argument]
>>
>> [2021-03-19 11:37:11.345174 +0000] W  
>> [dict.c:1532:dict_get_with_ref]  
>> (-->/usr/lib/x86_64-linux-gnu/glusterfs/9.0/xlator/features/locks.so(+0x26c3e) [0x7fb1dbdccc3e] -->/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_get_uint32+0x37) [0x7fb1e1a1f3f7] -->/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_get_with_ref+0x7d) [0x7fb1e1a1e86d] ) 0-dict: dict OR key (glusterfs.lk.lkmode) is NULL [Invalid  
>> argument]
>>
>> [2021-03-19 11:37:11.349569 +0000] W  
>> [dict.c:1532:dict_get_with_ref]  
>> (-->/usr/lib/x86_64-linux-gnu/glusterfs/9.0/xlator/features/locks.so(+0x26c3e) [0x7fb1dbdccc3e] -->/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_get_uint32+0x37) [0x7fb1e1a1f3f7] -->/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_get_with_ref+0x7d) [0x7fb1e1a1e86d] ) 0-dict: dict OR key (glusterfs.lk.lkmode) is NULL [Invalid  
>> argument]
>>
>> [2021-03-19 11:37:11.351920 +0000] W  
>> [dict.c:1532:dict_get_with_ref]  
>> (-->/usr/lib/x86_64-linux-gnu/glusterfs/9.0/xlator/features/locks.so(+0x26c3e) [0x7fb1dbdccc3e] -->/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_get_uint32+0x37) [0x7fb1e1a1f3f7] -->/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_get_with_ref+0x7d) [0x7fb1e1a1e86d] ) 0-dict: dict OR key (glusterfs.lk.lkmode) is NULL [Invalid  
>> argument]
>>
>> [2021-03-19 11:37:11.353282 +0000] W  
>> [dict.c:1532:dict_get_with_ref]  
>> (-->/usr/lib/x86_64-linux-gnu/glusterfs/9.0/xlator/features/locks.so(+0x26c3e) [0x7fb1dbdccc3e] -->/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_get_uint32+0x37) [0x7fb1e1a1f3f7] -->/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_get_with_ref+0x7d) [0x7fb1e1a1e86d] ) 0-dict: dict OR key (glusterfs.lk.lkmode) is NULL [Invalid  
>> argument]
>>
>> [2021-03-19 11:37:11.355272 +0000] W  
>> [dict.c:1532:dict_get_with_ref]  
>> (-->/usr/lib/x86_64-linux-gnu/glusterfs/9.0/xlator/features/locks.so(+0x26c3e) [0x7fb1dbdccc3e] -->/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_get_uint32+0x37) [0x7fb1e1a1f3f7] -->/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_get_with_ref+0x7d) [0x7fb1e1a1e86d] ) 0-dict: dict OR key (glusterfs.lk.lkmode) is NULL [Invalid  
>> argument]
>>
>> [2021-03-19 11:37:11.357467 +0000] W  
>> [dict.c:1532:dict_get_with_ref]  
>> (-->/usr/lib/x86_64-linux-gnu/glusterfs/9.0/xlator/features/locks.so(+0x26c3e) [0x7fb1dbdccc3e] -->/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_get_uint32+0x37) [0x7fb1e1a1f3f7] -->/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_get_with_ref+0x7d) [0x7fb1e1a1e86d] ) 0-dict: dict OR key (glusterfs.lk.lkmode) is NULL [Invalid  
>> argument]
>>
>> [2021-03-19 11:37:11.359584 +0000] W  
>> [dict.c:1532:dict_get_with_ref]  
>> (-->/usr/lib/x86_64-linux-gnu/glusterfs/9.0/xlator/features/locks.so(+0x26c3e) [0x7fb1dbdccc3e] -->/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_get_uint32+0x37) [0x7fb1e1a1f3f7] -->/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_get_with_ref+0x7d) [0x7fb1e1a1e86d] ) 0-dict: dict OR key (glusterfs.lk.lkmode) is NULL [Invalid  
>> argument]
>>
>> [2021-03-19 11:37:11.361771 +0000] W  
>> [dict.c:1532:dict_get_with_ref]  
>> (-->/usr/lib/x86_64-linux-gnu/glusterfs/9.0/xlator/features/locks.so(+0x26c3e) [0x7fb1dbdccc3e] -->/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_get_uint32+0x37) [0x7fb1e1a1f3f7] -->/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_get_with_ref+0x7d) [0x7fb1e1a1e86d] ) 0-dict: dict OR key (glusterfs.lk.lkmode) is NULL [Invalid  
>> argument]
>>
>> [2021-03-19 11:37:11.364013 +0000] W  
>> [dict.c:1532:dict_get_with_ref]  
>> (-->/usr/lib/x86_64-linux-gnu/glusterfs/9.0/xlator/features/locks.so(+0x26c3e) [0x7fb1dbdccc3e] -->/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_get_uint32+0x37) [0x7fb1e1a1f3f7] -->/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_get_with_ref+0x7d) [0x7fb1e1a1e86d] ) 0-dict: dict OR key (glusterfs.lk.lkmode) is NULL [Invalid  
>> argument]
>>
>> [2021-03-19 11:37:11.366094 +0000] W  
>> [dict.c:1532:dict_get_with_ref]  
>> (-->/usr/lib/x86_64-linux-gnu/glusterfs/9.0/xlator/features/locks.so(+0x26c3e) [0x7fb1dbdccc3e] -->/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_get_uint32+0x37) [0x7fb1e1a1f3f7] -->/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_get_with_ref+0x7d) [0x7fb1e1a1e86d] ) 0-dict: dict OR key (glusterfs.lk.lkmode) is NULL [Invalid  
>> argument]
>>
>> [2021-03-19 11:37:11.368094 +0000] W  
>> [dict.c:1532:dict_get_with_ref]  
>> (-->/usr/lib/x86_64-linux-gnu/glusterfs/9.0/xlator/features/locks.so(+0x26c3e) [0x7fb1dbdccc3e] -->/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_get_uint32+0x37) [0x7fb1e1a1f3f7] -->/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_get_with_ref+0x7d) [0x7fb1e1a1e86d] ) 0-dict: dict OR key (glusterfs.lk.lkmode) is NULL [Invalid  
>> argument]
>>
>> [2021-03-19 11:37:11.370261 +0000] W  
>> [dict.c:1532:dict_get_with_ref]  
>> (-->/usr/lib/x86_64-linux-gnu/glusterfs/9.0/xlator/features/locks.so(+0x26c3e) [0x7fb1dbdccc3e] -->/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_get_uint32+0x37) [0x7fb1e1a1f3f7] -->/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_get_with_ref+0x7d) [0x7fb1e1a1e86d] ) 0-dict: dict OR key (glusterfs.lk.lkmode) is NULL [Invalid  
>> argument]
>>
>> [2021-03-19 11:37:11.372389 +0000] W  
>> [dict.c:1532:dict_get_with_ref]  
>> (-->/usr/lib/x86_64-linux-gnu/glusterfs/9.0/xlator/features/locks.so(+0x26c3e) [0x7fb1dbdccc3e] -->/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_get_uint32+0x37) [0x7fb1e1a1f3f7] -->/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_get_with_ref+0x7d) [0x7fb1e1a1e86d] ) 0-dict: dict OR key (glusterfs.lk.lkmode) is NULL [Invalid  
>> argument]
>>
>> [2021-03-19 11:37:11.375550 +0000] W  
>> [dict.c:1532:dict_get_with_ref]  
>> (-->/usr/lib/x86_64-linux-gnu/glusterfs/9.0/xlator/features/locks.so(+0x26c3e) [0x7fb1dbdccc3e] -->/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_get_uint32+0x37) [0x7fb1e1a1f3f7] -->/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_get_with_ref+0x7d) [0x7fb1e1a1e86d] ) 0-dict: dict OR key (glusterfs.lk.lkmode) is NULL [Invalid  
>> argument]
>>
>> (...)
>>
>> [2021-03-19 11:37:11.477472 +0000] W  
>> [dict.c:1532:dict_get_with_ref]  
>> (-->/usr/lib/x86_64-linux-gnu/glusterfs/9.0/xlator/features/locks.so(+0x26c3e) [0x7fb1dbdccc3e] -->/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_get_uint32+0x37) [0x7fb1e1a1f3f7] -->/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_get_with_ref+0x7d) [0x7fb1e1a1e86d] ) 0-dict: dict OR key (glusterfs.lk.lkmode) is NULL [Invalid  
>> argument]
>>
>> [2021-03-19 11:37:11.479897 +0000] W  
>> [dict.c:1532:dict_get_with_ref]  
>> (-->/usr/lib/x86_64-linux-gnu/glusterfs/9.0/xlator/features/locks.so(+0x26c3e) [0x7fb1dbdccc3e] -->/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_get_uint32+0x37) [0x7fb1e1a1f3f7] -->/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_get_with_ref+0x7d) [0x7fb1e1a1e86d] ) 0-dict: dict OR key (glusterfs.lk.lkmode) is NULL [Invalid  
>> argument]
>>
>> [2021-03-19 11:37:11.482318 +0000] W  
>> [dict.c:1532:dict_get_with_ref]  
>> (-->/usr/lib/x86_64-linux-gnu/glusterfs/9.0/xlator/features/locks.so(+0x26c3e) [0x7fb1dbdccc3e] -->/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_get_uint32+0x37) [0x7fb1e1a1f3f7] -->/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_get_with_ref+0x7d) [0x7fb1e1a1e86d] ) 0-dict: dict OR key (glusterfs.lk.lkmode) is NULL [Invalid  
>> argument]
>>
>> [2021-03-19 11:37:11.484489 +0000] W  
>> [dict.c:1532:dict_get_with_ref]  
>> (-->/usr/lib/x86_64-linux-gnu/glusterfs/9.0/xlator/features/locks.so(+0x26c3e) [0x7fb1dbdccc3e] -->/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_get_uint32+0x37) [0x7fb1e1a1f3f7] -->/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_get_with_ref+0x7d) [0x7fb1e1a1e86d] ) 0-dict: dict OR key (glusterfs.lk.lkmode) is NULL [Invalid  
>> argument]
>>
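>> (The "(...)" above stands for many more identical entries; a rough
>> count, e.g.
>>
>>   grep -c 'glusterfs.lk.lkmode' /var/log/glusterfs/bricks/data-glusterfs-ssd-brick2-brick.log
>>
>> gives an idea of how noisy these runs are.)
>>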
>> Native gluster access with qemu-img is much slower and outputs errors:
>>
>> root at srv-31:~# time qemu-img create -f qcow2  
>> gluster://srv-31.lan.example.com/ssd-volume/libvirt/images/test_6.qcow2.img  
>> 64G
>>
>> Formatting 'gluster://srv-31.lan.example.com/ssd-volume/libvirt/images/test_6.qcow2.img', fmt=qcow2 size=68719476736 cluster_size=65536 lazy_refcounts=off refcount_bits=16
>>
>> [2021-03-19 11:47:42.490844 +0000] I  
>> [io-stats.c:3708:ios_sample_buf_size_configure] 0-ssd-volume:  
>> Configure ios_sample_buf  size is 1024 because ios_sample_interval  
>> is 0
>>
>> [2021-03-19 11:47:42.615942 +0000] E [MSGID: 108006]  
>> [afr-common.c:6146:__afr_handle_child_down_event]  
>> 0-ssd-volume-replicate-0: All subvolumes are down. Going offline  
>> until at least one of them comes back up.
>>
>> [2021-03-19 11:47:42.617033 +0000] E [MSGID: 108006]  
>> [afr-common.c:6146:__afr_handle_child_down_event]  
>> 0-ssd-volume-replicate-1: All subvolumes are down. Going offline  
>> until at least one of them comes back up.
>>
>> [2021-03-19 11:47:52.502174 +0000] I [io-stats.c:4038:fini]  
>> 0-ssd-volume: io-stats translator unloaded
>>
>> [2021-03-19 11:47:53.514364 +0000] I  
>> [io-stats.c:3708:ios_sample_buf_size_configure] 0-ssd-volume:  
>> Configure ios_sample_buf  size is 1024 because ios_sample_interval  
>> is 0
>>
>> [2021-03-19 11:47:53.654336 +0000] E [MSGID: 108006]  
>> [afr-common.c:6146:__afr_handle_child_down_event]  
>> 0-ssd-volume-replicate-0: All subvolumes are down. Going offline  
>> until at least one of them comes back up.
>>
>> [2021-03-19 11:47:53.655251 +0000] E [MSGID: 108006]  
>> [afr-common.c:6146:__afr_handle_child_down_event]  
>> 0-ssd-volume-replicate-1: All subvolumes are down. Going offline  
>> until at least one of them comes back up.
>>
>> [2021-03-19 11:48:03.525947 +0000] I [io-stats.c:4038:fini]  
>> 0-ssd-volume: io-stats translator unloaded
>>
>> real    0m22,068s
>>
>> user    0m0,055s
>>
>> sys     0m0,043s
>>
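>> (The client-side log verbosity can usually be turned down with
>> something like
>>
>>   gluster volume set ssd-volume diagnostics.client-log-level ERROR
>>
>> but that only hides the messages; it does not explain the ~22 s delay.)
>>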
>> In both cases images are written correctly (and usable):
>>
>> root at srv-31:~# l  
>> /data/glusterfs/ssd/brick[12]/brick/libvirt/images/test_[56]*
>>
>> -rw-r--r-- 2 root root 193K mar 19 11:37  
>> /data/glusterfs/ssd/brick2/brick/libvirt/images/test_5.qcow2.img
>>
>> -rw------- 2 root root 193K mar 19 11:47  
>> /data/glusterfs/ssd/brick2/brick/libvirt/images/test_6.qcow2.img
>>
>> But I get lots of warnings in the log files:
>>
>> root at srv-31:~# tail -f /var/log/glusterfs/glusterd.log   
>> /var/log/glusterfs/glustershd.log   
>> /var/log/glusterfs/bricks/data-glusterfs-ssd-brick[12]-brick.log
>>
>> ==> /var/log/glusterfs/glusterd.log <==
>>
>> [2021-03-19 11:47:42.483736 +0000] I [MSGID: 106496]  
>> [glusterd-handshake.c:969:__server_getspec] 0-management: Received  
>> mount request for volume ssd-volume
>>
>> ==> /var/log/glusterfs/bricks/data-glusterfs-ssd-brick1-brick.log <==
>>
>> [2021-03-19 11:47:42.498908 +0000] I  
>> [addr.c:54:compare_addr_and_update]  
>> 0-/data/glusterfs/ssd/brick1/brick: allowed = "*", received addr =  
>> "127.0.0.1"
>>
>> [2021-03-19 11:47:42.498967 +0000] I [login.c:110:gf_auth]  
>> 0-auth/login: allowed user names:  
>> 80c9a2bf-6fad-4023-8458-a175cef7f681
>>
>> [2021-03-19 11:47:42.498989 +0000] I [MSGID: 115029]  
>> [server-handshake.c:561:server_setvolume] 0-ssd-volume-server:  
>> accepted client from  
>> CTX_ID:d0cc883b-859c-4143-ab22-83e72673904a-GRAPH_ID:0-PID:53235-HOST:srv-31-PC_NAME:ssd-volume-client-0-RECON_NO:-0 (version: 9.0) with subvol  
>> /data/glusterfs/ssd/brick1/brick
>>
>> ==> /var/log/glusterfs/bricks/data-glusterfs-ssd-brick2-brick.log <==
>>
>> [2021-03-19 11:47:42.502029 +0000] I  
>> [addr.c:54:compare_addr_and_update]  
>> 0-/data/glusterfs/ssd/brick2/brick: allowed = "*", received addr =  
>> "127.0.0.1"
>>
>> [2021-03-19 11:47:42.502087 +0000] I [login.c:110:gf_auth]  
>> 0-auth/login: allowed user names:  
>> 80c9a2bf-6fad-4023-8458-a175cef7f681
>>
>> [2021-03-19 11:47:42.502109 +0000] I [MSGID: 115029]  
>> [server-handshake.c:561:server_setvolume] 0-ssd-volume-server:  
>> accepted client from  
>> CTX_ID:d0cc883b-859c-4143-ab22-83e72673904a-GRAPH_ID:0-PID:53235-HOST:srv-31-PC_NAME:ssd-volume-client-3-RECON_NO:-0 (version: 9.0) with subvol  
>> /data/glusterfs/ssd/brick2/brick
>>
>> ==> /var/log/glusterfs/bricks/data-glusterfs-ssd-brick1-brick.log <==
>>
>> [2021-03-19 11:47:42.615182 +0000] W [socket.c:767:__socket_rwv]  
>> 0-tcp.ssd-volume-server: readv on 127.0.0.1:49151 failed (No data  
>> available)
>>
>> [2021-03-19 11:47:42.615258 +0000] I [MSGID: 115036]  
>> [server.c:500:server_rpc_notify] 0-ssd-volume-server: disconnecting  
>> connection  
>> [{client-uid=CTX_ID:d0cc883b-859c-4143-ab22-83e72673904a-GRAPH_ID:0-PID:53235-HOST:srv-31-PC_NAME:ssd-volume-client-0-RECON_NO:-0}]
>>
>> ==> /var/log/glusterfs/bricks/data-glusterfs-ssd-brick2-brick.log <==
>>
>> [2021-03-19 11:47:42.615215 +0000] W [socket.c:767:__socket_rwv]  
>> 0-tcp.ssd-volume-server: readv on 127.0.0.1:49150 failed (No data  
>> available)
>>
>> [2021-03-19 11:47:42.615316 +0000] I [MSGID: 115036]  
>> [server.c:500:server_rpc_notify] 0-ssd-volume-server: disconnecting  
>> connection  
>> [{client-uid=CTX_ID:d0cc883b-859c-4143-ab22-83e72673904a-GRAPH_ID:0-PID:53235-HOST:srv-31-PC_NAME:ssd-volume-client-3-RECON_NO:-0}]
>>
>> ==> /var/log/glusterfs/bricks/data-glusterfs-ssd-brick1-brick.log <==
>>
>> [2021-03-19 11:47:42.615459 +0000] I [MSGID: 101055]  
>> [client_t.c:397:gf_client_unref] 0-ssd-volume-server: Shutting down  
>> connection  
>> CTX_ID:d0cc883b-859c-4143-ab22-83e72673904a-GRAPH_ID:0-PID:53235-HOST:srv-31-PC_NAME:ssd-volume-client-0-RECON_NO:-0
>>
>> ==> /var/log/glusterfs/bricks/data-glusterfs-ssd-brick2-brick.log <==
>>
>> [2021-03-19 11:47:42.615524 +0000] I [MSGID: 101055]  
>> [client_t.c:397:gf_client_unref] 0-ssd-volume-server: Shutting down  
>> connection  
>> CTX_ID:d0cc883b-859c-4143-ab22-83e72673904a-GRAPH_ID:0-PID:53235-HOST:srv-31-PC_NAME:ssd-volume-client-3-RECON_NO:-0
>>
>> ==> /var/log/glusterfs/bricks/data-glusterfs-ssd-brick1-brick.log <==
>>
>> [2021-03-19 11:47:53.522422 +0000] I  
>> [addr.c:54:compare_addr_and_update]  
>> 0-/data/glusterfs/ssd/brick1/brick: allowed = "*", received addr =  
>> "127.0.0.1"
>>
>> [2021-03-19 11:47:53.522480 +0000] I [login.c:110:gf_auth]  
>> 0-auth/login: allowed user names:  
>> 80c9a2bf-6fad-4023-8458-a175cef7f681
>>
>> [2021-03-19 11:47:53.522502 +0000] I [MSGID: 115029]  
>> [server-handshake.c:561:server_setvolume] 0-ssd-volume-server:  
>> accepted client from  
>> CTX_ID:53e36496-2f5a-4599-9d6b-aeda7822a7a0-GRAPH_ID:0-PID:53235-HOST:srv-31-PC_NAME:ssd-volume-client-0-RECON_NO:-0 (version: 9.0) with subvol  
>> /data/glusterfs/ssd/brick1/brick
>>
>> ==> /var/log/glusterfs/bricks/data-glusterfs-ssd-brick2-brick.log <==
>>
>> [2021-03-19 11:47:53.525344 +0000] I  
>> [addr.c:54:compare_addr_and_update]  
>> 0-/data/glusterfs/ssd/brick2/brick: allowed = "*", received addr =  
>> "127.0.0.1"
>>
>> [2021-03-19 11:47:53.525396 +0000] I [login.c:110:gf_auth]  
>> 0-auth/login: allowed user names:  
>> 80c9a2bf-6fad-4023-8458-a175cef7f681
>>
>> [2021-03-19 11:47:53.525419 +0000] I [MSGID: 115029]  
>> [server-handshake.c:561:server_setvolume] 0-ssd-volume-server:  
>> accepted client from  
>> CTX_ID:53e36496-2f5a-4599-9d6b-aeda7822a7a0-GRAPH_ID:0-PID:53235-HOST:srv-31-PC_NAME:ssd-volume-client-3-RECON_NO:-0 (version: 9.0) with subvol  
>> /data/glusterfs/ssd/brick2/brick
>>
>> ==> /var/log/glusterfs/bricks/data-glusterfs-ssd-brick1-brick.log <==
>>
>> [2021-03-19 11:47:53.653652 +0000] W [socket.c:767:__socket_rwv]  
>> 0-tcp.ssd-volume-server: readv on 127.0.0.1:49146 failed (No data  
>> available)
>>
>> [2021-03-19 11:47:53.653731 +0000] I [MSGID: 115036]  
>> [server.c:500:server_rpc_notify] 0-ssd-volume-server: disconnecting  
>> connection  
>> [{client-uid=CTX_ID:53e36496-2f5a-4599-9d6b-aeda7822a7a0-GRAPH_ID:0-PID:53235-HOST:srv-31-PC_NAME:ssd-volume-client-0-RECON_NO:-0}]
>>
>> ==> /var/log/glusterfs/bricks/data-glusterfs-ssd-brick2-brick.log <==
>>
>> [2021-03-19 11:47:53.653752 +0000] W [socket.c:767:__socket_rwv]  
>> 0-tcp.ssd-volume-server: readv on 127.0.0.1:49144 failed (No data  
>> available)
>>
>> [2021-03-19 11:47:53.653840 +0000] I [MSGID: 115036]  
>> [server.c:500:server_rpc_notify] 0-ssd-volume-server: disconnecting  
>> connection  
>> [{client-uid=CTX_ID:53e36496-2f5a-4599-9d6b-aeda7822a7a0-GRAPH_ID:0-PID:53235-HOST:srv-31-PC_NAME:ssd-volume-client-3-RECON_NO:-0}]
>>
>> ==> /var/log/glusterfs/bricks/data-glusterfs-ssd-brick1-brick.log <==
>>
>> [2021-03-19 11:47:53.653928 +0000] I [MSGID: 101055]  
>> [client_t.c:397:gf_client_unref] 0-ssd-volume-server: Shutting down  
>> connection  
>> CTX_ID:53e36496-2f5a-4599-9d6b-aeda7822a7a0-GRAPH_ID:0-PID:53235-HOST:srv-31-PC_NAME:ssd-volume-client-0-RECON_NO:-0
>>
>> ==> /var/log/glusterfs/bricks/data-glusterfs-ssd-brick2-brick.log <==
>>
>> [2021-03-19 11:47:53.654040 +0000] I [MSGID: 101055]  
>> [client_t.c:397:gf_client_unref] 0-ssd-volume-server: Shutting down  
>> connection  
>> CTX_ID:53e36496-2f5a-4599-9d6b-aeda7822a7a0-GRAPH_ID:0-PID:53235-HOST:srv-31-PC_NAME:ssd-volume-client-3-RECON_NO:-0
>>
>> ==> /var/log/glusterfs/glusterd.log <==
>>
>> [2021-03-19 11:47:53.507207 +0000] I [MSGID: 106496]  
>> [glusterd-handshake.c:969:__server_getspec] 0-management: Received  
>> mount request for volume ssd-volume
>>
>> Starting a VM with a virtual disk on the mounted gluster volume
>> (-drive file=/mnt/ssd-volume/libvirt/images/test_6.qcow2.img) is
>> quick:
>>
>> root at srv-31:~# time virsh start vm-11
>>
>> Domain vm-11 started
>>
>> real    0m0,277s
>>
>> user    0m0,020s
>>
>> sys     0m0,004s
>>
>> Starting a VM with a virtual disk on the gluster volume with native
>> access (-drive
>> file=gluster://srv-31.lan.example.com:24007/ssd-volume/libvirt/images/test_7.qcow2.img) is much
>> slower:
>>
>> root at srv-31:~# time virsh start vm-11
>>
>> Domain vm-11 started
>>
>> real    0m22,339s
>>
>> user    0m0,021s
>>
>> sys     0m0,007s
>>
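>> (To take libvirt out of the picture, the same libgfapi path can be
>> exercised directly, e.g.:
>>
>>   time qemu-img info gluster://srv-31.lan.example.com:24007/ssd-volume/libvirt/images/test_7.qcow2.img
>> )
>>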
>> From time to time, even without user operations, log files report errors:
>>
>> root at srv-31:~# tail -f /var/log/glusterfs/glusterd.log   
>> /var/log/glusterfs/glustershd.log   
>> /var/log/glusterfs/bricks/data-glusterfs-ssd-brick[12]-brick.log
>>
>> ==> /var/log/glusterfs/bricks/data-glusterfs-ssd-brick1-brick.log <==
>>
>> [2021-03-19 11:45:09.525079 +0000] E [MSGID: 113002]  
>> [posix-entry-ops.c:682:posix_mkdir] 0-ssd-volume-posix: gfid is  
>> null for (null) [Invalid argument]
>>
>> [2021-03-19 11:45:09.525212 +0000] E [MSGID: 115056]  
>> [server-rpc-fops_v2.c:497:server4_mkdir_cbk] 0-ssd-volume-server:  
>> MKDIR info [{frame=11817}, {MKDIR_path=},  
>> {uuid_utoa=00000000-0000-0000-0000-000000000001}, {bname=},  
>> {client=CTX_ID:4b47408f-323c-4c6a-9a20-2ae2a3a2cdb8-GRAPH_ID:3-PID:2291-HOST:srv-32-PC_NAME:ssd-volume-client-0-RECON_NO:-0}, {error-xlator=ssd-volume-posix}, {errno=22}, {error=Invalid  
>> argument}]
>>
>> ==> /var/log/glusterfs/bricks/data-glusterfs-ssd-brick2-brick.log <==
>>
>> [2021-03-19 11:45:13.525581 +0000] E [MSGID: 113002]  
>> [posix-entry-ops.c:682:posix_mkdir] 0-ssd-volume-posix: gfid is  
>> null for (null) [Invalid argument]
>>
>> [2021-03-19 11:45:13.525711 +0000] E [MSGID: 115056]  
>> [server-rpc-fops_v2.c:497:server4_mkdir_cbk] 0-ssd-volume-server:  
>> MKDIR info [{frame=10055}, {MKDIR_path=},  
>> {uuid_utoa=00000000-0000-0000-0000-000000000001}, {bname=},  
>> {client=CTX_ID:4b47408f-323c-4c6a-9a20-2ae2a3a2cdb8-GRAPH_ID:3-PID:2291-HOST:srv-32-PC_NAME:ssd-volume-client-3-RECON_NO:-0}, {error-xlator=ssd-volume-posix}, {errno=22}, {error=Invalid  
>> argument}]
>>
>> ==> /var/log/glusterfs/bricks/data-glusterfs-ssd-brick1-brick.log <==
>>
>> [2021-03-19 11:45:38.829803 +0000] E [MSGID: 115056]  
>> [server-rpc-fops_v2.c:497:server4_mkdir_cbk] 0-ssd-volume-server:  
>> MKDIR info [{frame=11820}, {MKDIR_path=},  
>> {uuid_utoa=00000000-0000-0000-0000-000000000001}, {bname=},  
>> {client=CTX_ID:974f4637-64ef-42e6-afad-1dc9c67c4a43-GRAPH_ID:3-PID:2062-HOST:srv-33-PC_NAME:ssd-volume-client-0-RECON_NO:-0}, {error-xlator=ssd-volume-posix}, {errno=22}, {error=Invalid  
>> argument}]
>>
>> ==> /var/log/glusterfs/bricks/data-glusterfs-ssd-brick2-brick.log <==
>>
>> [2021-03-19 11:45:38.829970 +0000] E [MSGID: 115056]  
>> [server-rpc-fops_v2.c:497:server4_mkdir_cbk] 0-ssd-volume-server:  
>> MKDIR info [{frame=10058}, {MKDIR_path=},  
>> {uuid_utoa=00000000-0000-0000-0000-000000000001}, {bname=},  
>> {client=CTX_ID:974f4637-64ef-42e6-afad-1dc9c67c4a43-GRAPH_ID:3-PID:2062-HOST:srv-33-PC_NAME:ssd-volume-client-3-RECON_NO:-0}, {error-xlator=ssd-volume-posix}, {errno=22}, {error=Invalid  
>> argument}]
>>
>> ==> /var/log/glusterfs/bricks/data-glusterfs-ssd-brick1-brick.log <==
>>
>> [2021-03-19 11:45:38.829721 +0000] E [MSGID: 113002]  
>> [posix-entry-ops.c:682:posix_mkdir] 0-ssd-volume-posix: gfid is  
>> null for (null) [Invalid argument]
>>
>> ==> /var/log/glusterfs/bricks/data-glusterfs-ssd-brick2-brick.log <==
>>
>> [2021-03-19 11:45:38.829883 +0000] E [MSGID: 113002]  
>> [posix-entry-ops.c:682:posix_mkdir] 0-ssd-volume-posix: gfid is  
>> null for (null) [Invalid argument]
>>
>> [2021-03-19 11:45:50.005995 +0000] E [MSGID: 113002]  
>> [posix-entry-ops.c:682:posix_mkdir] 0-ssd-volume-posix: gfid is  
>> null for (null) [Invalid argument]
>>
>> [2021-03-19 11:45:50.006115 +0000] E [MSGID: 115056]  
>> [server-rpc-fops_v2.c:497:server4_mkdir_cbk] 0-ssd-volume-server:  
>> MKDIR info [{frame=10082}, {MKDIR_path=},  
>> {uuid_utoa=00000000-0000-0000-0000-000000000001}, {bname=},  
>> {client=CTX_ID:4945546f-f368-4fa7-8bfc-3dd7abda5d1b-GRAPH_ID:3-PID:2486-HOST:srv-31-PC_NAME:ssd-volume-client-3-RECON_NO:-0}, {error-xlator=ssd-volume-posix}, {errno=22}, {error=Invalid  
>> argument}]
>>
>> ==> /var/log/glusterfs/bricks/data-glusterfs-ssd-brick1-brick.log <==
>>
>> [2021-03-19 11:45:50.006096 +0000] E [MSGID: 113002]  
>> [posix-entry-ops.c:682:posix_mkdir] 0-ssd-volume-posix: gfid is  
>> null for (null) [Invalid argument]
>>
>> [2021-03-19 11:45:50.006212 +0000] E [MSGID: 115056]  
>> [server-rpc-fops_v2.c:497:server4_mkdir_cbk] 0-ssd-volume-server:  
>> MKDIR info [{frame=11844}, {MKDIR_path=},  
>> {uuid_utoa=00000000-0000-0000-0000-000000000001}, {bname=},  
>> {client=CTX_ID:4945546f-f368-4fa7-8bfc-3dd7abda5d1b-GRAPH_ID:3-PID:2486-HOST:srv-31-PC_NAME:ssd-volume-client-0-RECON_NO:-0}, {error-xlator=ssd-volume-posix}, {errno=22}, {error=Invalid  
>> argument}]
>>
>> ==> /var/log/glusterfs/glustershd.log <==
>>
>> [2021-03-19 11:45:50.006255 +0000] E [MSGID: 114031]  
>> [client-rpc-fops_v2.c:214:client4_0_mkdir_cbk]  
>> 3-ssd-volume-client-3: remote operation failed. [{path=(null)},  
>> {errno=22}, {error=Invalid argument}]
>>
>> [2021-03-19 11:45:50.006352 +0000] E [MSGID: 114031]  
>> [client-rpc-fops_v2.c:214:client4_0_mkdir_cbk]  
>> 3-ssd-volume-client-0: remote operation failed. [{path=(null)},  
>> {errno=22}, {error=Invalid argument}]
>>
>> [2021-03-19 11:45:50.006408 +0000] E [MSGID: 114031]  
>> [client-rpc-fops_v2.c:214:client4_0_mkdir_cbk]  
>> 3-ssd-volume-client-4: remote operation failed. [{path=(null)},  
>> {errno=22}, {error=Invalid argument}]
>>
>> [2021-03-19 11:45:50.006505 +0000] E [MSGID: 114031]  
>> [client-rpc-fops_v2.c:214:client4_0_mkdir_cbk]  
>> 3-ssd-volume-client-5: remote operation failed. [{path=(null)},  
>> {errno=22}, {error=Invalid argument}]
>>
>> [2021-03-19 11:45:50.006562 +0000] E [MSGID: 114031]  
>> [client-rpc-fops_v2.c:214:client4_0_mkdir_cbk]  
>> 3-ssd-volume-client-1: remote operation failed. [{path=(null)},  
>> {errno=22}, {error=Invalid argument}]
>>
>> [2021-03-19 11:45:50.006645 +0000] E [MSGID: 114031]  
>> [client-rpc-fops_v2.c:214:client4_0_mkdir_cbk]  
>> 3-ssd-volume-client-2: remote operation failed. [{path=(null)},  
>> {errno=22}, {error=Invalid argument}]
>>
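>> (The client PIDs above match the self-heal daemons listed in "gluster
>> volume status" below, so these entries seem to come from glustershd;
>> the heal queue itself can be checked with, e.g.:
>>
>>   gluster volume heal ssd-volume info
>>   gluster volume heal ssd-volume info summary
>> )
>>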
>> Everything seems to be OK:
>>
>> root at srv-31:~# gluster volume status ssd-volume
>>
>> Status of volume: ssd-volume
>>
>> Gluster process                                                 TCP Port  RDMA Port  Online  Pid
>> ------------------------------------------------------------------------------
>> Brick srv-31.lan.example.com:/data/glusterfs/ssd/brick1/brick   49156     0          Y       9242
>> Brick srv-32.lan.example.com:/data/glusterfs/ssd/brick1/brick   49156     0          Y       7968
>> Brick srv-33.lan.example.com:/data/glusterfs/ssd/brick1/brick   49156     0          Y       8211
>> Brick srv-31.lan.example.com:/data/glusterfs/ssd/brick2/brick   49157     0          Y       9258
>> Brick srv-32.lan.example.com:/data/glusterfs/ssd/brick2/brick   49157     0          Y       7984
>> Brick srv-33.lan.example.com:/data/glusterfs/ssd/brick2/brick   49157     0          Y       8227
>> Self-heal Daemon on localhost                                   N/A       N/A        Y       2486
>> Self-heal Daemon on srv-32.lan.example.com                      N/A       N/A        Y       2291
>> Self-heal Daemon on srv-33.lan.example.com                      N/A       N/A        Y       2062
>>
>> Task Status of Volume ssd-volume
>> ------------------------------------------------------------------------------
>> There are no active volume tasks
>>
>> root at srv-31:~# gluster volume info ssd-volume
>>
>> Volume Name: ssd-volume
>>
>> Type: Distributed-Replicate
>>
>> Volume ID: a6f3426f-5b33-404b-ab61-581e24e0c36d
>>
>> Status: Started
>>
>> Snapshot Count: 0
>>
>> Number of Bricks: 2 x 3 = 6
>>
>> Transport-type: tcp
>>
>> Bricks:
>>
>> Brick1: srv-31.lan.example.com:/data/glusterfs/ssd/brick1/brick
>>
>> Brick2: srv-32.lan.example.com:/data/glusterfs/ssd/brick1/brick
>>
>> Brick3: srv-33.lan.example.com:/data/glusterfs/ssd/brick1/brick
>>
>> Brick4: srv-31.lan.example.com:/data/glusterfs/ssd/brick2/brick
>>
>> Brick5: srv-32.lan.example.com:/data/glusterfs/ssd/brick2/brick
>>
>> Brick6: srv-33.lan.example.com:/data/glusterfs/ssd/brick2/brick
>>
>> Options Reconfigured:
>>
>> server.allow-insecure: on
>>
>> storage.owner-gid: 64055
>>
>> storage.owner-uid: 64055
>>
>> ssl.cipher-list: HIGH:!SSLv2
>>
>> performance.client-io-threads: off
>>
>> nfs.disable: on
>>
>> transport.address-family: inet
>>
>> storage.fips-mode-rchecksum: on
>>
>> client.ssl: off
>>
>> server.ssl: off
>>
>> auth.ssl-allow: *
>>
>> root at srv-31:~# gluster peer status
>>
>> Number of Peers: 2
>>
>> Hostname: srv-32.lan.example.com
>>
>> Uuid: 0a11bfb9-5821-42f0-8ef3-18b36bfcfa8a
>>
>> State: Peer in Cluster (Connected)
>>
>> Hostname: srv-33.lan.example.com
>>
>> Uuid: 111d93d2-64aa-463f-8f33-7b53a3f05cab
>>
>> State: Peer in Cluster (Connected)
>>
>> root at srv-31:~# gluster pool list
>>
>> UUID                                  Hostname                State
>> 0a11bfb9-5821-42f0-8ef3-18b36bfcfa8a  srv-32.lan.example.com  Connected
>> 111d93d2-64aa-463f-8f33-7b53a3f05cab  srv-33.lan.example.com  Connected
>> 67e9da67-90c5-486b-acb1-eb91abebcefb  localhost               Connected
>>
>> root at srv-32:~# gluster pool list
>>
>> UUID                                  Hostname                State
>> 111d93d2-64aa-463f-8f33-7b53a3f05cab  srv-33.lan.example.com  Connected
>> 67e9da67-90c5-486b-acb1-eb91abebcefb  10.0.0.31               Connected
>> 0a11bfb9-5821-42f0-8ef3-18b36bfcfa8a  localhost               Connected
>>
>> root at srv-33:~# gluster pool list
>>
>> UUID                                  Hostname                State
>> 0a11bfb9-5821-42f0-8ef3-18b36bfcfa8a  srv-32.lan.example.com  Connected
>> 67e9da67-90c5-486b-acb1-eb91abebcefb  10.0.0.31               Connected
>> 111d93d2-64aa-463f-8f33-7b53a3f05cab  localhost               Connected
>>
>> root at srv-31:~# cat /etc/glusterfs/glusterd.vol
>>
>> volume management
>>
>>     type mgmt/glusterd
>>
>>     option working-directory /var/lib/glusterd
>>
>>     option transport-type socket
>>
>>     option transport.socket.keepalive-time 10
>>
>>     option transport.socket.keepalive-interval 2
>>
>>     option transport.socket.read-fail-log off
>>
>>     option transport.socket.listen-port 24007
>>
>>     option ping-timeout 0
>>
>>     option event-threads 1
>>
>> #   option lock-timer 180
>>
>> #   option transport.address-family inet6
>>
>> #   option base-port 49152
>>
>>     option max-port  60999
>>
>>     option rpc-auth-allow-insecure on
>>
>> end-volume
>>
>> root at srv-31:~# ss -lntp
>>
>> State     Recv-Q    Send-Q    Local Address:Port    Peer Address:Port
>> LISTEN    0         128       0.0.0.0:49153         0.0.0.0:*    users:(("glusterfsd",pid=2455,fd=11))
>> LISTEN    0         128       0.0.0.0:49156         0.0.0.0:*    users:(("glusterfsd",pid=9242,fd=11))
>> LISTEN    0         128       0.0.0.0:49157         0.0.0.0:*    users:(("glusterfsd",pid=9258,fd=11))
>> LISTEN    0         128       0.0.0.0:24007         0.0.0.0:*    users:(("glusterd",pid=1954,fd=10))
>> LISTEN    0         128       0.0.0.0:111           0.0.0.0:*    users:(("rpcbind",pid=1716,fd=4),("systemd",pid=1,fd=36))
>> LISTEN    0         128       0.0.0.0:2224          0.0.0.0:*    users:(("pcsd",pid=2126,fd=4))
>> LISTEN    0         128       0.0.0.0:22            0.0.0.0:*    users:(("sshd",pid=1963,fd=3))
>> LISTEN    0         20        127.0.0.1:25          0.0.0.0:*    users:(("exim4",pid=2377,fd=3))
>> LISTEN    0         128       0.0.0.0:49152         0.0.0.0:*    users:(("glusterfsd",pid=2446,fd=11))
>> LISTEN    0         128       [::]:111              [::]:*       users:(("rpcbind",pid=1716,fd=6),("systemd",pid=1,fd=38))
>> LISTEN    0         128       [::]:2224             [::]:*       users:(("pcsd",pid=2126,fd=5))
>>
>> root at srv-31:~# ss -tnp | grep gluster
>>
>> ESTAB   0         0                127.0.0.1:49149             
>> 127.0.1.1:24007     
>> users:(("glusterfsd",pid=2446,fd=9))                                          
>>
>> ESTAB   0         0                10.0.0.31:49144             
>> 10.0.0.33:49153     
>> users:(("glusterfs",pid=2486,fd=17))                                          
>>
>> ESTAB   0         0                127.0.0.1:24007             
>> 127.0.0.1:49145     
>> users:(("glusterd",pid=1954,fd=26))                                           
>>
>> ESTAB   0         0                127.0.1.1:49157             
>> 127.0.0.1:49140     
>> users:(("glusterfsd",pid=9258,fd=10))                                         
>>
>> ESTAB   0         0                127.0.0.1:49137             
>> 127.0.1.1:49156     
>> users:(("glusterfs",pid=2917,fd=10))                                          
>>
>> ESTAB   0         0                10.0.0.31:49146             
>> 10.0.0.32:49152     
>> users:(("glusterfs",pid=2486,fd=25))                                          
>>
>> ESTAB   0         0                10.0.0.31:49153             
>> 10.0.0.33:49147     
>> users:(("glusterfsd",pid=2455,fd=270))                                        
>>
>> ESTAB   0         0                127.0.1.1:24007             
>> 127.0.0.1:49148     
>> users:(("glusterd",pid=1954,fd=12))                                           
>>
>> ESTAB   0         0                10.0.0.31:49157             
>> 10.0.0.33:49141     
>> users:(("glusterfsd",pid=9258,fd=271))                                        
>>
>> ESTAB   0         0                127.0.1.1:24007             
>> 127.0.0.1:49131     
>> users:(("glusterd",pid=1954,fd=31))                                           
>>
>> ESTAB   0         0                10.0.0.31:49152             
>> 10.0.0.32:49146     
>> users:(("glusterfsd",pid=2446,fd=271))                                        
>>
>> ESTAB   0         0                10.0.0.31:49134             
>> 10.0.0.33:24007     
>> users:(("glusterd",pid=1954,fd=33))                                           
>>
>> ESTAB   0         0                10.0.0.31:49127             
>> 10.0.0.33:49157     
>> users:(("glusterfs",pid=2917,fd=11))                                          
>>
>> ESTAB   0         0                127.0.0.1:49142             
>> 127.0.1.1:49156     
>> users:(("glusterfs",pid=2486,fd=15))                                          
>>
>> ESTAB   0         0                127.0.0.1:49135             
>> 127.0.1.1:49157     
>> users:(("glusterfs",pid=2917,fd=14))                                          
>>
>> ESTAB   0         0                127.0.0.1:49145             
>> 127.0.0.1:24007     
>> users:(("glusterfs",pid=2486,fd=9))                                           
>>
>> ESTAB   0         0                127.0.1.1:49157             
>> 127.0.0.1:49135     
>> users:(("glusterfsd",pid=9258,fd=272))                                        
>>
>> ESTAB   0         0                127.0.1.1:49152             
>> 127.0.0.1:49141     
>> users:(("glusterfsd",pid=2446,fd=10))                                         
>>
>> ESTAB   0         0                10.0.0.31:49157             
>> 10.0.0.32:49144     
>> users:(("glusterfsd",pid=9258,fd=270))                                        
>>
>> ESTAB   0         0                127.0.1.1:24007             
>> 127.0.0.1:49132     
>> users:(("glusterd",pid=1954,fd=25))                                           
>>
>> ESTAB   0         0                127.0.0.1:49148             
>> 127.0.1.1:24007     
>> users:(("glusterfsd",pid=2455,fd=9))                                          
>>
>> ESTAB   0         0                10.0.0.31:49118             
>> 10.0.0.32:49156     
>> users:(("glusterfs",pid=2486,fd=13))                                          
>>
>> ESTAB   0         0                127.0.1.1:49156             
>> 127.0.0.1:49137     
>> users:(("glusterfsd",pid=9242,fd=272))                                        
>>
>> ESTAB   0         0                10.0.0.31:49121             
>> 10.0.0.33:49156     
>> users:(("glusterfs",pid=2917,fd=13))                                          
>>
>> ESTAB   0         0                127.0.0.1:49132             
>> 127.0.1.1:24007     
>> users:(("glusterfsd",pid=9242,fd=9))                                          
>>
>> ESTAB   0         0                127.0.0.1:49141             
>> 127.0.1.1:49152     
>> users:(("glusterfs",pid=2486,fd=22))                                          
>>
>> ESTAB   0         0                10.0.0.31:49156             
>> 10.0.0.32:49148     
>> users:(("glusterfsd",pid=9242,fd=270))                                        
>>
>> ESTAB   0         0                10.0.0.31:49152             
>> 10.0.0.33:49149     
>> users:(("glusterfsd",pid=2446,fd=270))                                        
>>
>> ESTAB   0         0                127.0.0.1:49140             
>> 127.0.1.1:49157     
>> users:(("glusterfs",pid=2486,fd=11))                                          
>>
>> ESTAB   0         0                10.0.0.31:49123             
>> 10.0.0.32:49157     
>> users:(("glusterfs",pid=2917,fd=12))                                          
>>
>> ESTAB   0         0                10.0.0.31:49141             
>> 10.0.0.33:49152     
>> users:(("glusterfs",pid=2486,fd=23))                                          
>>
>> ESTAB   0         0                127.0.0.1:49131             
>> 127.0.1.1:24007     
>> users:(("glusterfsd",pid=9258,fd=9))                                          
>>
>> ESTAB   0         0                10.0.0.31:49153             
>> 10.0.0.32:49145     
>> users:(("glusterfsd",pid=2455,fd=271))                                        
>>
>> ESTAB   0         0                10.0.0.31:49117             
>> 10.0.0.33:49156     
>> users:(("glusterfs",pid=2486,fd=5))                                           
>>
>> ESTAB   0         0                10.0.0.31:49114             
>> 10.0.0.32:49157     
>> users:(("glusterfs",pid=2486,fd=16))                                          
>>
>> ESTAB   0         0                127.0.1.1:49153             
>> 127.0.0.1:49138     
>> users:(("glusterfsd",pid=2455,fd=10))                                         
>>
>> ESTAB   0         0                10.0.0.31:49143             
>> 10.0.0.32:24007     
>> users:(("glusterfs",pid=2917,fd=9))                                           
>>
>> ESTAB   0         0                10.0.0.31:49113             
>> 10.0.0.33:49157     
>> users:(("glusterfs",pid=2486,fd=18))                                          
>>
>> ESTAB   0         0                10.0.0.31:49156             
>> 10.0.0.33:49143     
>> users:(("glusterfsd",pid=9242,fd=271))                                        
>>
>> ESTAB   0         0                127.0.0.1:49138             
>> 127.0.1.1:49153     
>> users:(("glusterfs",pid=2486,fd=19))                                          
>>
>> ESTAB   0         0                10.0.0.31:24007             
>> 10.0.0.33:49150     
>> users:(("glusterd",pid=1954,fd=32))                                           
>>
>> ESTAB   0         0                10.0.0.31:49131             
>> 10.0.0.32:49156     
>> users:(("glusterfs",pid=2917,fd=7))                                           
>>
>> ESTAB   0         0                10.0.0.31:49145             
>> 10.0.0.32:49153     
>> users:(("glusterfs",pid=2486,fd=8))                                           
>>
>> ESTAB   0         0                127.0.1.1:24007             
>> 127.0.0.1:49149     
>> users:(("glusterd",pid=1954,fd=13))                                           
>>
>> ESTAB   0         0                10.0.0.31:24007             
>> 10.0.0.32:49150     
>> users:(("glusterd",pid=1954,fd=7))                                            
>>
>> ESTAB   0         0                10.0.0.31:49151             
>> 10.0.0.32:24007     
>> users:(("glusterd",pid=1954,fd=8))                                            
>>
>> ESTAB   0         0                127.0.1.1:49156             
>> 127.0.0.1:49142    users:(("glusterfsd",pid=9242,fd=10))
>>
>> Hostnames resolve correctly:
>>
>> root at srv-31:~# for i in {31..33}; do host srv-$i.lan.example.com; done
>>
>> srv-31.lan.example.com has address 10.0.0.31
>>
>> srv-32.lan.example.com has address 10.0.0.32
>>
>> srv-33.lan.example.com has address 10.0.0.33
>>
>> And network connectivity is OK:
>>
>> root at srv-31:~# for i in {31..33}; do ping -c 1 -q  
>> srv-$i.lan.example.com | echo "srv-$i ok"; done
>>
>> srv-31 ok
>>
>> srv-32 ok
>>
>> srv-33 ok
>>
>> root at srv-31:~# for i in {31..33}; do nc -v srv-$i.lan.example.com  
>> 24007; done
>>
>> Connection to srv-31.lan.example.com 24007 port [tcp/*] succeeded!
>>
>> Connection to srv-32.lan.example.com 24007 port [tcp/*] succeeded!
>>
>> Connection to srv-33.lan.example.com 24007 port [tcp/*] succeeded!
>>
>> root at srv-31:~# iptables -L
>>
>> Chain INPUT (policy ACCEPT)
>>
>> target     prot opt source               destination        
>>
>> Chain FORWARD (policy ACCEPT)
>>
>> target     prot opt source               destination        
>>
>> Chain OUTPUT (policy ACCEPT)
>>
>> target     prot opt source               destination
>>
>> ________
>>
>>
>>
>>
>>
>>
>>
>>  Community Meeting Calendar:
>>
>>
>>
>>  Schedule -
>>
>>  Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
>>
>>  Bridge: https://meet.google.com/cpu-eiue-hvk
>>
>>  Gluster-users mailing list
>>
>>  Gluster-users at gluster.org
>>
>>  https://lists.gluster.org/mailman/listinfo/gluster-users

----- End of message from Strahil Nikolov <hunter86_bg at yahoo.com> -----




