[Gluster-Maintainers] QEMU integration Health Check on 3.10.0-0.1.rc0
Niels de Vos
ndevos at redhat.com
Fri Feb 17 11:06:01 UTC 2017
Thanks Prasanna!
You can just copy/paste the results in
https://github.com/gluster/glusterfs/issues/111 and attach the log
there. I do not think the issues you mentioned are blockers for the
release, but bugs should be filed in Bugzilla against the 3.10 version.
Checking with Poornima and Ravi about those should point to
existing reports or patches in the master branch.
Cheers,
Niels
On Fri, Feb 17, 2017 at 04:24:47PM +0530, Prasanna Kalever wrote:
> Hi Niels,
>
>
> Finally made some time to do the health checkup.
>
> Issues noticed:
>
> 1.
> # qemu-img create gluster://10.70.42.226/sample/ok.img 2G
> Formatting 'gluster://10.70.42.226/sample/ok.img', fmt=raw size=2147483648
> [2017-02-17 10:44:23.213343] E [MSGID: 108006]
> [afr-common.c:4778:afr_notify] 0-sample-replicate-0: All subvolumes
> are down. Going offline until atleast one of them comes back up.
>
> 2.
> # Also noticed a readdir issue in the fuse mount log
> [2017-02-17 09:16:12.735043] E [MSGID: 129006]
> [readdir-ahead.c:576:rda_opendir] 0-sample-readdir-ahead: Dict get of
> key:readdir-filter-directories failed with :-2
>
>
> I think there is a patch from Poornima to fix point 2.
> Will talk to Ravi about a fix for point 1.
>
>
> Nothing major in the libvirt/qemu logs either.
>
>
> Everything else worked fine :)
>
> Things Checked:
> 1. Create a VM
> 2. Start a VM
> 3. Create live internal snapshots and delete one by one
> 4. Create external snapshots and merge them live
>
>
> Please find the detailed commands/steps (ps -aux) in the attachment,
> with all outputs.
>
>
> Also please let me know if anything is missing!
>
>
> Cheers!
> --
> Prasanna
> 1. Download the Gluster packages below from:
> Ref: https://download.gluster.org/pub/gluster/glusterfs/3.10/3.10.0rc0/Fedora/fedora-25/x86_64/
>
> glusterfs-libs-3.10.0-0.1.rc0.fc25.x86_64.rpm
>
> glusterfs-3.10.0-0.1.rc0.fc25.x86_64.rpm
> glusterfs-server-3.10.0-0.1.rc0.fc25.x86_64.rpm
>
> glusterfs-client-xlators-3.10.0-0.1.rc0.fc25.x86_64.rpm
> glusterfs-fuse-3.10.0-0.1.rc0.fc25.x86_64.rpm
> glusterfs-api-3.10.0-0.1.rc0.fc25.x86_64.rpm
>
> glusterfs-cli-3.10.0-0.1.rc0.fc25.x86_64.rpm
>
>
>
> 2. dnf install qemu-kvm
> Installed:
> SDL2.x86_64 2.0.5-3.fc25 aajohan-comfortaa-fonts.noarch 2.004-6.fc24
> adwaita-cursor-theme.noarch 3.22.0-1.fc25 adwaita-icon-theme.noarch 3.22.0-1.fc25
> alsa-lib.x86_64 1.1.1-2.fc25 at-spi2-atk.x86_64 2.22.0-1.fc25
> at-spi2-core.x86_64 2.22.0-1.fc25 atk.x86_64 2.22.0-1.fc25
> bluez-libs.x86_64 5.43-2.fc25 boost-iostreams.x86_64 1.60.0-10.fc25
> boost-random.x86_64 1.60.0-10.fc25 boost-system.x86_64 1.60.0-10.fc25
> boost-thread.x86_64 1.60.0-10.fc25 brlapi.x86_64 0.6.5-2.fc25
> brltty.x86_64 5.4-2.fc25 cairo.x86_64 1.14.8-1.fc25
> cairo-gobject.x86_64 1.14.8-1.fc25 celt051.x86_64 0.5.1.3-11.fc24
> colord-libs.x86_64 1.3.4-1.fc25 dconf.x86_64 0.26.0-1.fc25
> edk2-ovmf.noarch 20161105git3b25ca8-1.fc25 flac-libs.x86_64 1.3.2-1.fc25
> fontconfig.x86_64 2.12.1-1.fc25 fontpackages-filesystem.noarch 1.44-17.fc24
> gdk-pixbuf2-modules.x86_64 2.36.0-1.fc25 gperftools-libs.x86_64 2.5-2.fc25
> graphite2.x86_64 1.3.6-1.fc25 gsm.x86_64 1.0.16-1.fc25
> gtk-update-icon-cache.x86_64 3.22.7-1.fc25 gtk3.x86_64 3.22.7-1.fc25
> harfbuzz.x86_64 1.3.2-1.fc25 hicolor-icon-theme.noarch 0.15-3.fc24
> ipxe-roms-qemu.noarch 20160622-1.git0418631.fc25 jasper-libs.x86_64 1.900.13-2.fc25
> jbigkit-libs.x86_64 2.1-5.fc24 lcms2.x86_64 2.8-2.fc25
> libICE.x86_64 1.0.9-5.fc25 libSM.x86_64 1.2.2-4.fc24
> libX11.x86_64 1.6.4-4.fc25 libX11-common.noarch 1.6.4-4.fc25
> libXau.x86_64 1.0.8-6.fc24 libXcomposite.x86_64 0.4.4-8.fc24
> libXcursor.x86_64 1.1.14-6.fc24 libXdamage.x86_64 1.1.4-8.fc24
> libXext.x86_64 1.3.3-4.fc24 libXfixes.x86_64 5.0.3-1.fc25
> libXft.x86_64 2.3.2-4.fc24 libXi.x86_64 1.7.9-1.fc25
> libXinerama.x86_64 1.1.3-6.fc24 libXrandr.x86_64 1.5.1-1.fc25
> libXrender.x86_64 0.9.10-1.fc25 libXtst.x86_64 1.2.3-1.fc25
> libXxf86vm.x86_64 1.1.4-3.fc24 libasyncns.x86_64 0.8-10.fc24
> libcacard.x86_64 3:2.5.2-2.fc24 libdatrie.x86_64 0.2.9-3.fc25
> libdrm.x86_64 2.4.75-1.fc25 libepoxy.x86_64 1.3.1-3.fc25
> libfdt.x86_64 1.4.2-1.fc25 libgusb.x86_64 0.2.9-1.fc25
> libibverbs.x86_64 1.2.1-1.fc25 libiscsi.x86_64 1.15.0-2.fc24
> libjpeg-turbo.x86_64 1.5.1-0.fc25 libnfs.x86_64 1.9.8-2.fc24
> libogg.x86_64 2:1.3.2-5.fc24 libpciaccess.x86_64 0.13.4-3.fc24
> librados2.x86_64 1:10.2.4-2.fc25 librbd1.x86_64 1:10.2.4-2.fc25
> librdmacm.x86_64 1.1.0-1.fc25 libsndfile.x86_64 1.0.27-1.fc25
> libthai.x86_64 0.1.25-1.fc25 libtiff.x86_64 4.0.7-2.fc25
> libunwind.x86_64 1.1-11.fc24 libvorbis.x86_64 1:1.3.5-1.fc25
> libwayland-client.x86_64 1.12.0-1.fc25 libwayland-cursor.x86_64 1.12.0-1.fc25
> libwayland-server.x86_64 1.12.0-1.fc25 libxcb.x86_64 1.12-1.fc25
> libxshmfence.x86_64 1.2-3.fc24 lttng-ust.x86_64 2.8.1-2.fc25
> mesa-libEGL.x86_64 13.0.3-5.fc25 mesa-libGL.x86_64 13.0.3-5.fc25
> mesa-libgbm.x86_64 13.0.3-5.fc25 mesa-libglapi.x86_64 13.0.3-5.fc25
> mesa-libwayland-egl.x86_64 13.0.3-5.fc25 opus.x86_64 1.1.3-2.fc25
> pango.x86_64 1.40.3-1.fc25 pulseaudio-libs.x86_64 10.0-2.fc25
> qemu-common.x86_64 2:2.7.1-2.fc25 qemu-kvm.x86_64 2:2.7.1-2.fc25
> qemu-system-x86.x86_64 2:2.7.1-2.fc25 rest.x86_64 0.8.0-1.fc25
> seabios-bin.noarch 1.9.3-1.fc25 seavgabios-bin.noarch 1.9.3-1.fc25
> sgabios-bin.noarch 1:0.20110622svn-9.fc24 spice-server.x86_64 0.13.3-2.fc25
> usbredir.x86_64 0.7.1-2.fc24 virglrenderer.x86_64 0.5.0-1.20160411git61846f92f.fc25
> vte-profile.x86_64 0.46.1-1.fc25 vte3.x86_64 0.36.5-2.fc24
> xen-libs.x86_64 4.7.1-7.fc25 xen-licenses.x86_64 4.7.1-7.fc25
> yajl.x86_64 2.1.0-5.fc24
>
>
> 3. dnf install libvirt
> Installed:
> autogen-libopts.x86_64 5.18.10-1.fc25
> corosync.x86_64 2.4.2-1.fc25
> corosynclib.x86_64 2.4.2-1.fc25
> cyrus-sasl.x86_64 2.1.26-26.2.fc24
> cyrus-sasl-md5.x86_64 2.1.26-26.2.fc24
> dmidecode.x86_64 1:3.0-7.fc25
> gnutls-dane.x86_64 3.5.5-2.fc25
> gnutls-utils.x86_64 3.5.5-2.fc25
> iscsi-initiator-utils.x86_64 6.2.0.873-34.git4c1f2d9.fc25
> iscsi-initiator-utils-iscsiuio.x86_64 6.2.0.873-34.git4c1f2d9.fc25
> libcgroup.x86_64 0.41-9.fc25
> libqb.x86_64 1.0.1-1.fc25
> libvirt.x86_64 2.2.0-2.fc25
> libvirt-client.x86_64 2.2.0-2.fc25
> libvirt-daemon.x86_64 2.2.0-2.fc25
> libvirt-daemon-config-network.x86_64 2.2.0-2.fc25
> libvirt-daemon-config-nwfilter.x86_64 2.2.0-2.fc25
> libvirt-daemon-driver-interface.x86_64 2.2.0-2.fc25
> libvirt-daemon-driver-libxl.x86_64 2.2.0-2.fc25
> libvirt-daemon-driver-lxc.x86_64 2.2.0-2.fc25
> libvirt-daemon-driver-network.x86_64 2.2.0-2.fc25
> libvirt-daemon-driver-nodedev.x86_64 2.2.0-2.fc25
> libvirt-daemon-driver-nwfilter.x86_64 2.2.0-2.fc25
> libvirt-daemon-driver-qemu.x86_64 2.2.0-2.fc25
> libvirt-daemon-driver-secret.x86_64 2.2.0-2.fc25
> libvirt-daemon-driver-storage.x86_64 2.2.0-2.fc25
> libvirt-daemon-driver-uml.x86_64 2.2.0-2.fc25
> libvirt-daemon-driver-vbox.x86_64 2.2.0-2.fc25
> libvirt-daemon-driver-xen.x86_64 2.2.0-2.fc25
> libvirt-libs.x86_64 2.2.0-2.fc25
> libwsman1.x86_64 2.6.2-7.fc25
> libxslt.x86_64 1.1.28-13.fc25
> lzop.x86_64 1.03-15.fc25
> net-snmp-libs.x86_64 1:5.7.3-13.fc25
> netcf-libs.x86_64 0.2.8-4.fc24
> numad.x86_64 0.5-21.20150602git.fc24
> qemu-img.x86_64 2:2.7.1-2.fc25
> radvd.x86_64 2.14-1.fc25
> sheepdog.x86_64 1.0.1-2.fc25
> systemd-container.x86_64 231-10.fc25
> unbound-libs.x86_64 1.5.10-1.fc25
>
> 4. dnf install virt-install
> Installed:
> libosinfo.x86_64 1.0.0-1.fc25 libvirt-python.x86_64 2.2.0-1.fc25
> osinfo-db.noarch 20170211-1.fc25 osinfo-db-tools.x86_64 1.1.0-1.fc25
> python-chardet.noarch 2.3.0-1.fc25 python-gobject-base.x86_64 3.22.0-1.fc25
> python-ipaddr.noarch 2.1.10-5.fc25 python-libxml2.x86_64 2.9.3-4.fc25
> python-six.noarch 1.10.0-3.fc25 python2-pysocks.noarch 1.5.6-5.fc25
> python2-requests.noarch 2.10.0-4.fc25 python2-urllib3.noarch 1.15.1-3.fc25
> virt-install.noarch 1.4.0-5.fc25 virt-manager-common.noarch 1.4.0-5.fc25
>
> 5. Create a Gluster volume and copy a VM file
> [root at dhcp42-226 ~]# rpm -qa | grep gluster
> glusterfs-client-xlators-3.10.0-0.1.rc0.fc25.x86_64
> glusterfs-fuse-3.10.0-0.1.rc0.fc25.x86_64
> glusterfs-server-3.10.0-0.1.rc0.fc25.x86_64
> glusterfs-libs-3.10.0-0.1.rc0.fc25.x86_64
> glusterfs-3.10.0-0.1.rc0.fc25.x86_64
> glusterfs-api-3.10.0-0.1.rc0.fc25.x86_64
> glusterfs-cli-3.10.0-0.1.rc0.fc25.x86_64
>
>
> [root at dhcp42-226 ~]# systemctl daemon-reload
>
> [root at dhcp42-226 ~]# systemctl restart glusterd
>
> [root at dhcp42-226 ~]# systemctl status glusterd
>
> [root at dhcp42-226 ~]# gluster peer probe 10.70.42.151
> peer probe: success.
> [root at dhcp42-226 ~]# gluster peer probe 10.70.42.149
> peer probe: success.
>
> [root at dhcp42-226 ~]# gluster pool list
> UUID Hostname State
> 31d0e050-ca0d-4580-b02a-4d35c05f9ce8 10.70.42.151 Connected
> 65be7fa2-3c8c-4755-adfb-5807d7b61f48 10.70.42.149 Connected
> 47fea1a7-2454-498f-9760-eb8feb36945c localhost Connected
>
> [root at dhcp42-226 ~]# gluster vol create sample replica 3 10.70.42.226:/br1 10.70.42.151:/br1 10.70.42.149:/br1 force
> volume create: sample: success: please start the volume to access data
>
> [root at dhcp42-226 ~]# gluster vol start sample
> volume start: sample: success
>
> [root at dhcp42-226 ~]# mount.glusterfs localhost:/sample /mnt
>
> 6. Copy the VM image to /mnt
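>
> (The source path of the image is not part of the report; for example, assuming
> the image was prepared under the default libvirt images directory:)
>
> # cp /var/lib/libvirt/images/fedora.img /mnt/fedora.img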
>
>
> 7. Define a domain XML file for gfapi access, with the proper host address, volume, and image info:
> # virsh define HFM.xml
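>
> HFM.xml is not included inline; for reference, a minimal sketch of the <disk>
> element it would need (the volume, hosts and port match the generated qemu
> command line shown below, and the 'vda' target matches the blockcommit command
> used later; the rest of the domain XML is assumed to be standard):
>
>     <disk type='network' device='disk'>
>       <driver name='qemu' type='qcow2' cache='none'/>
>       <source protocol='gluster' name='sample/fedora.img'>
>         <host name='10.70.42.226' port='24007'/>
>         <host name='10.70.42.151' port='24007'/>
>         <host name='10.70.42.149' port='24007'/>
>       </source>
>       <target dev='vda' bus='virtio'/>
>     </disk>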
>
> 8. virsh start HFM
> [root at dhcp42-226 ~]# ps -aux | grep qemu
> qemu 4755 87.2 13.4 9212744 544556 ? Sl 15:12 0:22 /usr/bin/qemu-system-x86_64 -machine accel=kvm -name guest=HFM,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-8-HFM/master-key.aes -machine pc-i440fx-2.4,accel=tcg,usb=off -cpu Westmere -m 6000 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid 4563f4d9-bc09-47e7-a258-f60e8b83b551 -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-8-HFM/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file.driver=gluster,file.volume=sample,file.path=/fedora.img,file.server.0.type=tcp,file.server.0.host=10.70.42.226,file.server.0.port=24007,file.server.1.type=tcp,file.server.1.host=10.70.42.151,file.server.1.port=24007,file.server.2.type=tcp,file.server.2.host=10.70.42.149,file.server.2.port=24007,format=qcow2,if=none,id=drive-virtio-disk0,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,fd=28,id=hostnet0 -device rtl8139,netdev=hostnet0,id=net0,mac=52:54:00:14:c6:eb,bus=pci.0,addr=0x3 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -vnc 127.0.0.1:0 -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5 -msg timestamp=on
>
>
> [root at dhcp42-226 ~]# virsh list --all
> Id Name State
> ----------------------------------------------------
> - HFM running
>
>
> # virsh console HFM
> Got in and everything looked okay.
> Shut down the guest.
>
> Expected (the domain is shut off; inactive snapshots are not supported on gluster 'network' disks):
> # virsh snapshot-create-as --domain HFM --name snap1
> error: internal error: internal inactive snapshots are not supported on 'network' disks using 'gluster' protocol
>
> [root at dhcp42-226 ~]# virsh snapshot-create HFM --xmlfile ./snap.xml --disk-only --reuse-external
> error: internal error: external inactive snapshots are not supported on 'network' disks using 'gluster' protocol
>
>
> # virsh start HFM
>
> [root at dhcp42-226 ~]# virsh list --all
> Id Name State
> ----------------------------------------------------
> 9 HFM running
>
> [root at dhcp42-226 ~]# virsh snapshot-create-as --domain HFM --name snap1
> Domain snapshot snap1 created
> [root at dhcp42-226 ~]#
>
> [root at dhcp42-226 ~]# virsh snapshot-create-as --domain HFM --name snap2
> Domain snapshot snap2 created
>
> [root at dhcp42-226 ~]# virsh snapshot-create-as --domain HFM --name snap3
> Domain snapshot snap3 created
>
> [root at dhcp42-226 ~]# qemu-img info /mnt/fedora.img
> image: /mnt/fedora.img
> file format: qcow2
> virtual size: 40G (42949672960 bytes)
> disk size: 2.3G
> cluster_size: 65536
> Snapshot list:
> ID TAG VM SIZE DATE VM CLOCK
> 1 snap1 272M 2017-02-17 15:50:02 00:09:14.004
> 2 snap2 272M 2017-02-17 15:50:29 00:09:16.338
> 3 snap3 272M 2017-02-17 15:50:56 00:09:18.057
> Format specific information:
> compat: 0.10
> refcount bits: 16
>
>
>
>
> [root at dhcp42-226 ~]# virsh snapshot-list HFM --tree
> snap1
> |
> +- snap2
> |
> +- snap3
>
>
> [root at dhcp42-226 ~]# ps -aux | grep qemu
> qemu 5340 28.4 39.8 9393796 1613048 ? Sl 15:39 3:33 /usr/bin/qemu-system-x86_64 -machine accel=kvm -name guest=HFM,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-9-HFM/master-key.aes -machine pc-i440fx-2.4,accel=tcg,usb=off -cpu Westmere -m 6000 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid 4563f4d9-bc09-47e7-a258-f60e8b83b551 -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-9-HFM/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file.driver=gluster,file.volume=sample,file.path=/fedora.img,file.server.0.type=tcp,file.server.0.host=10.70.42.226,file.server.0.port=24007,file.server.1.type=tcp,file.server.1.host=10.70.42.151,file.server.1.port=24007,file.server.2.type=tcp,file.server.2.host=10.70.42.149,file.server.2.port=24007,format=qcow2,if=none,id=drive-virtio-disk0,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,fd=28,id=hostnet0 -device rtl8139,netdev=hostnet0,id=net0,mac=52:54:00:14:c6:eb,bus=pci.0,addr=0x3 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -vnc 127.0.0.1:0 -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5 -msg timestamp=on
>
>
> [root at dhcp42-226 ~]# virsh snapshot-delete HFM --current
> Domain snapshot snap3 deleted
>
> [root at dhcp42-226 ~]# virsh snapshot-delete HFM --current
> Domain snapshot snap2 deleted
>
> [root at dhcp42-226 ~]# virsh snapshot-delete HFM --current
> Domain snapshot snap1 deleted
>
>
>
> [root at dhcp42-226 ~]# virsh snapshot-list HFM
> Name Creation Time State
> ------------------------------------------------------------
>
>
>
> ***** Let's play with external snapshots:
>
> [root at dhcp42-226 ~]# qemu-img create -f qcow2 -b gluster://127.0.0.1/sample/fedora.img gluster://127.0.0.1/sample/newsnap.qcow2
> [2017-02-17 10:23:28.035119] E [MSGID: 108006] [afr-common.c:4778:afr_notify] 0-sample-replicate-0: All subvolumes are down. Going offline until atleast one of them comes back up.
> Formatting 'gluster://127.0.0.1/sample/newsnap.qcow2', fmt=qcow2 size=42949672960 backing_file=gluster://127.0.0.1/sample/fedora.img encryption=off cluster_size=65536 lazy_refcounts=off refcount_bits=16
> [2017-02-17 10:23:30.113228] E [MSGID: 108006] [afr-common.c:4778:afr_notify] 0-sample-replicate-0: All subvolumes are down. Going offline until atleast one of them comes back up.
> [2017-02-17 10:23:31.138768] E [MSGID: 108006] [afr-common.c:4778:afr_notify] 0-sample-replicate-0: All subvolumes are down. Going offline until atleast one of them comes back up.
> [2017-02-17 10:23:32.141147] E [MSGID: 108006] [afr-common.c:4778:afr_notify] 0-sample-replicate-0: All subvolumes are down. Going offline until atleast one of them comes back up.
> [2017-02-17 10:23:34.018123] E [MSGID: 108006] [afr-common.c:4778:afr_notify] 0-sample-replicate-0: All subvolumes are down. Going offline until atleast one of them comes back up.
> [root at dhcp42-226 ~]#
>
>
>
> [root at dhcp42-226 ~]# qemu-img info /mnt/fedora.img
> image: /mnt/fedora.img
> file format: qcow2
> virtual size: 40G (42949672960 bytes)
> disk size: 1.5G
> cluster_size: 65536
> Format specific information:
> compat: 0.10
> refcount bits: 16
>
>
> [root at dhcp42-226 ~]# qemu-img info /mnt/newsnap.qcow2
> image: /mnt/newsnap.qcow2
> file format: qcow2
> virtual size: 40G (42949672960 bytes)
> disk size: 193K
> cluster_size: 65536
> backing file: gluster://127.0.0.1/sample/fedora.img
> Format specific information:
> compat: 1.1
> lazy refcounts: false
> refcount bits: 16
> corrupt: false
>
>
> [root at dhcp42-226 ~]# virsh snapshot-create HFM --xmlfile ./snap.xml --disk-only --reuse-external
> Domain snapshot 1487327268 created from './snap.xml'
> [root at dhcp42-226 ~]# virsh snapshot-list HFM
> Name Creation Time State
> ------------------------------------------------------------
> 1487327268 2017-02-17 15:57:48 +0530 disk-snapshot
>
> [root at dhcp42-226 ~]#
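>
> The ./snap.xml itself is not shown here; a minimal sketch of what it may
> contain, assuming the pre-created newsnap.qcow2 from above is reused as the
> external overlay for vda (libvirt generates the timestamp name when no <name>
> is given; snap1.xml and snap2.xml would be the same with newsnap1.qcow2 and
> newsnap2.qcow2):
>
>     <domainsnapshot>
>       <disks>
>         <disk name='vda' snapshot='external' type='network'>
>           <driver type='qcow2'/>
>           <source protocol='gluster' name='sample/newsnap.qcow2'>
>             <host name='127.0.0.1' port='24007'/>
>           </source>
>         </disk>
>       </disks>
>     </domainsnapshot>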
>
> [root at dhcp42-226 ~]# qemu-img create -f qcow2 -b gluster://10.70.42.226/sample/fedora.img gluster://10.70.42.226/sample/newsnap1.qcow2
> [2017-02-17 10:30:26.614475] E [MSGID: 108006] [afr-common.c:4778:afr_notify] 0-sample-replicate-0: All subvolumes are down. Going offline until atleast one of them comes back up.
> Formatting 'gluster://10.70.42.226/sample/newsnap1.qcow2', fmt=qcow2 size=42949672960 backing_file=gluster://10.70.42.226/sample/fedora.img encryption=off cluster_size=65536 lazy_refcounts=off refcount_bits=16
> [2017-02-17 10:30:27.716624] E [MSGID: 108006] [afr-common.c:4778:afr_notify] 0-sample-replicate-0: All subvolumes are down. Going offline until atleast one of them comes back up.
> [2017-02-17 10:30:28.725452] E [MSGID: 108006] [afr-common.c:4778:afr_notify] 0-sample-replicate-0: All subvolumes are down. Going offline until atleast one of them comes back up.
> [2017-02-17 10:30:29.732419] E [MSGID: 108006] [afr-common.c:4778:afr_notify] 0-sample-replicate-0: All subvolumes are down. Going offline until atleast one of them comes back up.
> [2017-02-17 10:30:30.606138] E [MSGID: 108006] [afr-common.c:4778:afr_notify] 0-sample-replicate-0: All subvolumes are down. Going offline until atleast one of them comes back up.
>
> [root at dhcp42-226 ~]# virsh snapshot-create HFM --xmlfile ./snap1.xml --disk-only --reuse-external
> Domain snapshot 1487327463 created from './snap1.xml'
>
>
> [root at dhcp42-226 ~]# qemu-img info /mnt/newsnap1.qcow2
> image: /mnt/newsnap1.qcow2
> file format: qcow2
> virtual size: 40G (42949672960 bytes)
> disk size: 193K
> cluster_size: 65536
> backing file: gluster://10.70.42.226/sample/fedora.img
> Format specific information:
> compat: 1.1
> lazy refcounts: false
> refcount bits: 16
> corrupt: false
>
>
> [root at dhcp42-226 ~]# qemu-img create -f qcow2 -b gluster://10.70.42.226/sample/fedora.img gluster://10.70.42.226/sample/newsnap2.qcow2
> [2017-02-17 10:30:47.398684] E [MSGID: 108006] [afr-common.c:4778:afr_notify] 0-sample-replicate-0: All subvolumes are down. Going offline until atleast one of them comes back up.
> Formatting 'gluster://10.70.42.226/sample/newsnap2.qcow2', fmt=qcow2 size=42949672960 backing_file=gluster://10.70.42.226/sample/fedora.img encryption=off cluster_size=65536 lazy_refcounts=off refcount_bits=16
> [2017-02-17 10:30:48.388020] E [MSGID: 108006] [afr-common.c:4778:afr_notify] 0-sample-replicate-0: All subvolumes are down. Going offline until atleast one of them comes back up.
> [2017-02-17 10:30:49.422176] E [MSGID: 108006] [afr-common.c:4778:afr_notify] 0-sample-replicate-0: All subvolumes are down. Going offline until atleast one of them comes back up.
> [2017-02-17 10:30:50.413815] E [MSGID: 108006] [afr-common.c:4778:afr_notify] 0-sample-replicate-0: All subvolumes are down. Going offline until atleast one of them comes back up.
> [2017-02-17 10:30:51.293244] E [MSGID: 108006] [afr-common.c:4778:afr_notify] 0-sample-replicate-0: All subvolumes are down. Going offline until atleast one of them comes back up.
>
> [root at dhcp42-226 ~]# virsh snapshot-create HFM --xmlfile ./snap2.xml --disk-only --reuse-external
> Domain snapshot 1487327474 created from './snap2.xml'
> [root at dhcp42-226 ~]#
>
>
> [root at dhcp42-226 ~]# qemu-img info /mnt/newsnap2.qcow2
> image: /mnt/newsnap2.qcow2
> file format: qcow2
> virtual size: 40G (42949672960 bytes)
> disk size: 193K
> cluster_size: 65536
> backing file: gluster://10.70.42.226/sample/fedora.img
> Format specific information:
> compat: 1.1
> lazy refcounts: false
> refcount bits: 16
> corrupt: false
>
>
> [root at dhcp42-226 ~]# virsh snapshot-list HFM
> Name Creation Time State
> ------------------------------------------------------------
> 1487327268 2017-02-17 15:57:48 +0530 disk-snapshot
> 1487327463 2017-02-17 16:01:03 +0530 disk-snapshot
> 1487327474 2017-02-17 16:01:14 +0530 disk-snapshot
>
> [root at dhcp42-226 ~]# virsh snapshot-list HFM --tree
> 1487327268
> |
> +- 1487327463
> |
> +- 1487327474
>
>
> [root at dhcp42-226 ~]#
>
>
> [root at dhcp42-226 ~]# virsh blockcommit HFM vda --active --pivot --wait --verbose
> Block commit: [100 %]
> Successfully pivoted
>
>
>
> [root at dhcp42-226 ~]# virsh snapshot-list HFM --tree
> 1487327268
> |
> +- 1487327463
> |
> +- 1487327474
>
>
> [root at dhcp42-226 ~]# virsh snapshot-delete --domain HFM 1487327474 --metadata
> Domain snapshot 1487327474 deleted
>
> [root at dhcp42-226 ~]# virsh snapshot-delete --domain HFM 1487327463 --metadata
> Domain snapshot 1487327463 deleted
>
> [root at dhcp42-226 ~]# virsh snapshot-delete --domain HFM 1487327268 --metadata
> Domain snapshot 1487327268 deleted
>