[Gluster-users] Issue enabling use-compound-fops with gfapi
Paolo Margara
paolo.margara at polito.it
Fri Sep 14 12:22:41 UTC 2018
Hi list,
on a dev system I'm testing some options that are supposed to give
improved performance. I'm running oVirt with gfapi enabled on Gluster
3.12.13, and when I set "cluster.use-compound-fops" to "on" every VM is
paused due to a storage I/O error, while the file system remains
accessible through the FUSE client (only gfapi applications stop
working).
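
For reference, I'm toggling the option with the usual volume set
command (the volume name is the one that appears in the logs below):

    gluster volume set vm-images-repo-demo cluster.use-compound-fops on
    gluster volume set vm-images-repo-demo cluster.use-compound-fops off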
In the QEMU log file I can see these Gluster-related messages:
2018-09-14T11:49:37.020942Z qemu-kvm: terminating on signal 15 from pid
1513 (/usr/sbin/libvirtd)
2018-09-14T11:49:42.766431Z qemu-kvm: Failed to flush the L2 table
cache: Input/output error
2018-09-14T11:49:44.766853Z qemu-kvm: Failed to flush the refcount block
cache: Input/output error
[2018-09-14 11:49:44.869112] E [MSGID: 108006]
[afr-common.c:5118:__afr_handle_child_down_event]
0-vm-images-repo-demo-replicate-1: All subvolumes are down. Going
offline until atleast one of them comes back up.
[2018-09-14 11:49:44.869284] E [MSGID: 108006]
[afr-common.c:5118:__afr_handle_child_down_event]
0-vm-images-repo-demo-replicate-0: All subvolumes are down. Going
offline until atleast one of them comes back up.
[2018-09-14 11:49:44.869515] E [MSGID: 108006]
[afr-common.c:5118:__afr_handle_child_down_event]
0-vm-images-repo-demo-replicate-2: All subvolumes are down. Going
offline until atleast one of them comes back up.
[2018-09-14 11:49:44.869639] E [MSGID: 108006]
[afr-common.c:5118:__afr_handle_child_down_event]
0-vm-images-repo-demo-replicate-3: All subvolumes are down. Going
offline until atleast one of them comes back up.
[2018-09-14 11:49:44.869823] E [MSGID: 108006]
[afr-common.c:5118:__afr_handle_child_down_event]
0-vm-images-repo-demo-replicate-4: All subvolumes are down. Going
offline until atleast one of them comes back up.
2018-09-14 11:49:45.827+0000: shutting down, reason=destroyed
If I set "cluster.use-compound-fops" back to "off", everything starts
working correctly again.
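
For what it's worth, here is a minimal libgfapi test program of the
kind that hits the error (just a sketch: "gluster-node1" is a
placeholder for one of my nodes, and the volume name comes from the
logs above):

    /* compound-fops-test.c
     * Build: gcc -o compound-fops-test compound-fops-test.c -lgfapi
     */
    #include <fcntl.h>
    #include <stdio.h>
    #include <glusterfs/api/glfs.h>

    int main(void)
    {
        /* volume name from the logs; host is a placeholder */
        glfs_t *fs = glfs_new("vm-images-repo-demo");
        if (!fs)
            return 1;
        glfs_set_volfile_server(fs, "tcp", "gluster-node1", 24007);
        glfs_set_logging(fs, "/tmp/gfapi-test.log", 7);
        if (glfs_init(fs) != 0) {
            perror("glfs_init");
            return 1;
        }

        glfs_fd_t *fd = glfs_creat(fs, "/compound-fops-test.img",
                                   O_RDWR, 0644);
        if (!fd) {
            perror("glfs_creat");
            glfs_fini(fs);
            return 1;
        }

        char buf[4096] = "test";
        /* with use-compound-fops=on I'd expect this write (or a
         * following flush) to fail with EIO */
        if (glfs_write(fd, buf, sizeof(buf), 0) < 0)
            perror("glfs_write");

        glfs_close(fd);
        glfs_fini(fs);
        return 0;
    }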
Is there something else I need to configure, or is this a bug?
Greetings,
Paolo