[Gluster-users] Fwd: Re: [ovirt-users] Re: [ANN] oVirt 4.3.7 Third Release Candidate is now available for testing

Strahil hunter86_bg at yahoo.com
Thu Nov 28 05:02:49 UTC 2019


Hello Community,

I want to share what happened to my oVirt lab.

After patching Gluster from v6.5 to v6.6, everything seemed fine (though I forgot to test powering VMs off and on).

Then I upgraded oVirt (a minor release) and noticed an issue: every VM I had powered off could no longer power on.
The local FUSE mount reported problems with the shards (see below). The strange thing was that if I first read a VM disk with dd as root, everything was fine, and after that vdsm's dd worked as well (before the root read, it failed).
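The test was roughly the following; the image path is a placeholder, since the exact path depends on the storage domain layout:

    # Reading the disk image as root succeeded:
    dd if=/rhev/data-center/mnt/glusterSD/<host>:_data__fast/<sd-uuid>/images/<img-uuid>/<vol-uuid> of=/dev/null bs=4M iflag=direct

    # The same read as the vdsm user failed with 'Permission denied' until the root read had been done:
    sudo -u vdsm dd if=/rhev/data-center/mnt/glusterSD/<host>:_data__fast/<sd-uuid>/images/<img-uuid>/<vol-uuid> of=/dev/null bs=4M iflag=direct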

I upgraded to Gluster v7.0 and rebooted all nodes. Now everything is up and running fine.


Best Regards,
Strahil Nikolov

---------- Forwarded message ----------
From: Strahil Nikolov <hunter86_bg at yahoo.com>
Date: Nov 27, 2019 10:41
Subject: Re: [ovirt-users] Re: [ANN] oVirt 4.3.7 Third Release Candidate is now available for testing
To: Nir Soffer <nsoffer at redhat.com>,Krutika Dhananjay <kdhananj at redhat.com>
Cc: Rafi Kavungal Chundattu Parambil <rkavunga at redhat.com>,users <users at ovirt.org>

> Hi Nir,All,
>
> It seems that 4.3.7 RC3 (and even RC4) are not the problem here (attached is a screenshot of oVirt running on Gluster v7).
> It seems strange that both of my serious issues with oVirt were related to a Gluster problem (first the Gluster v3-to-v5 migration, and now this one).
>
> I have just updated to Gluster v7.0 (CentOS 7 repos) and rebooted all nodes.
> Now both the Engine and all my VMs are back online, so if you hit issues with 6.6, give 7.0 a try (and 7.1 is coming soon) before deciding to wipe everything.
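> The upgrade itself was the usual yum procedure (assuming the CentOS Storage SIG packaging; adjust to your repos):
>
>     yum install centos-release-gluster7
>     yum update 'glusterfs*'
>     reboot
>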
>
> @Krutika,
>
> I guess you will ask for the logs, so shall we move this one over to gluster-users?
>
> Best Regards,
> Strahil Nikolov
>
> On Monday, November 25, 2019 at 16:45:48 GMT-5, Strahil Nikolov <hunter86_bg at yahoo.com> wrote:
>
>
> Hi Krutika,
>
> I have enabled TRACE log level for the volume data_fast, but even so the issue is not much clearer.
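> For reference, assuming the standard diagnostics volume options, the TRACE level was set along these lines:
>
>     gluster volume set data_fast diagnostics.client-log-level TRACE
>     gluster volume set data_fast diagnostics.brick-log-level TRACE
>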
> FUSE reports:
>
> [2019-11-25 21:31:53.478130] I [MSGID: 133022] [shard.c:3674:shard_delete_shards] 0-data_fast-shard: Deleted shards of gfid=6d9ed2e5-d4f2-4749-839b-2f13a68ed472 from backend
> [2019-11-25 21:32:43.564694] W [MSGID: 114031] [client-rpc-fops_v2.c:2634:client4_0_lookup_cbk] 0-data_fast-client-0: remote operation failed. Path: /.shard/b0af2b81-22cf-482e-9b2f-c431b6449dae.79 (00000000-0000-0000-0000-000000000000) [Permission denied]
> [2019-11-25 21:32:43.565653] W [MSGID: 114031] [client-rpc-fops_v2.c:2634:client4_0_lookup_cbk] 0-data_fast-client-1: remote operation failed. Path: /.shard/b0af2b81-22cf-482e-9b2f-c431b6449dae.79 (00000000-0000-0000-0000-000000000000) [Permission denied]
> [2019-11-25 21:32:43.565689] W [MSGID: 114031] [client-rpc-fops_v2.c:2634:client4_0_lookup_cbk] 0-data_fast-client-2: remote operation failed. Path: /.shard/b0af2b81-22cf-482e-9b2f-c431b6449dae.79 (00000000-0000-0000-0000-000000000000) [Permission denied]
> [2019-11-25 21:32:43.565770] E [MSGID: 133010] [shard.c:2327:shard_common_lookup_shards_cbk] 0-data_fast-shard: Lookup on shard 79 failed. Base file gfid = b0af2b81-22cf-482e-9b2f-c431b6449dae [Permission denied]
> [2019-11-25 21:32:43.565858] W [fuse-bridge.c:2830:fuse_readv_cbk] 0-glusterfs-fuse: 279: READ => -1 gfid=b0af2b81-22cf-482e-9b2f-c431b6449dae fd=0x7fbf40005ea8 (Permission denied)
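> Given the 'Permission denied' on the shard lookup, one thing worth checking directly on the bricks is the ownership and mode of that shard file (the brick path below is a placeholder):
>
>     ls -ln /gluster_bricks/data_fast/data_fast/.shard/b0af2b81-22cf-482e-9b2f-c431b6449dae.79
>
> vdsm accesses the images as uid/gid 36 (vdsm:kvm), so wrong ownership or permissions on the shards at brick level would produce exactly this kind of error.
>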
>
>
> While the BRICK logs on ovirt1/gluster1 report:
> [2019-11-25 21:32:43.564177] D [MSGID: 0] [io-threads.c:376:iot_schedule] 0-data_fast-io-threads: LOOKUP scheduled as fast priority fop
> [2019-11-25 21:32:43.564194] T [MSGID: 0] [defaults.c:2008:default_lookup_resume] 0-stack-trace: stack-address: 0x7fc02c00bbf8, winding from data_fast-io-threads to data_fast-upcall
> [2019-11-25 21:32:43.564206] T [MSGID: 0] [upcall.c:790:up_lookup] 0-stack-trace: stack-address: 0x7fc02c00bbf8, winding from data_fast-upcall to data_fast-leases
> [2019-11-25 21:32:43.564215] T [MSGID: 0] [defaults.c:2766:default_lookup] 0-stack-trace:
[Attachment: glusterv7-running-ovirt.PNG (71 KB): screenshot of oVirt running on Gluster v7]
URL: <http://lists.gluster.org/pipermail/gluster-users/attachments/20191128/670a1fe3/attachment.png>

