<p dir="ltr">Hello Community,</p>
<p dir="ltr">I want to share what has happend to my Ovirt Lab.</p>
<p dir="ltr">After a patching from v6.5 to v6.6 everything seemed fine (yet I for got to test poweroff and poweron of VMs).</p>
<p dir="ltr">Then I have upgraded oVirt (minor release) and then I noticed some issues - every VM I had powered off was not able to poweron.<br>
Local fuse mount reports issues with the shards (see below). The strange thing was if I read a VM disk with dd (using root) everything was fine, after that vdsm' s dd was also running (before root fails).</p>
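<p dir="ltr">Roughly, the read check looked like the sketch below. The mount path and UUIDs are only examples (not the real ones from my lab), so treat it as an illustration rather than the exact commands:</p>
<pre>
# Example disk image path on the FUSE mount of the data_fast volume
# (placeholder UUIDs - substitute your own storage domain / image / volume IDs):
DISK=/rhev/data-center/mnt/glusterSD/gluster1:_data__fast/SD_UUID/images/IMG_UUID/VOL_UUID

# Reading the disk as root worked fine:
dd if="$DISK" of=/dev/null bs=4M

# After that, the same read as the vdsm user (which had hit "Permission denied"
# on the shards before) also worked; vdsm normally reads with direct I/O:
sudo -u vdsm dd if="$DISK" of=/dev/null bs=4M iflag=direct
</pre>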
<p dir="ltr">I have upgraded to v7.0 and rebooted all nodes. Now everything is fine and running.<br></p>
<p dir="ltr">Best Regards,<br>
Strahil Nikolov</p>
<div class="quote">---------- Forwarded message ----------<br>From: Strahil Nikolov <hunter86_bg@yahoo.com><br>Date: Nov 27, 2019 10:41<br>Subject: Re: [ovirt-users] Re: [ANN] oVirt 4.3.7 Third Release Candidate is now available for testing<br>To: Nir Soffer <nsoffer@redhat.com>,Krutika Dhananjay <kdhananj@redhat.com><br>Cc: Rafi Kavungal Chundattu Parambil <rkavunga@redhat.com>,users <users@ovirt.org><br><br type='attribution'><blockquote class="quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div><div style="font-family:'courier new' , 'courier' , 'monaco' , monospace , sans-serif;font-size:16px"><div></div>
<div dir="ltr">Hi Nir,All,</div><div dir="ltr"><br /></div><div dir="ltr">it seems that 4.3.7 RC3 (and even RC4) are not the problem here(attached screenshot of oVirt running on v7 gluster).</div><div dir="ltr">It seems strange that both my serious issues with oVirt are related to gluster issue (1st gluster v3 to v5 migration and now this one).<br /></div><div dir="ltr"><br /></div><div dir="ltr">I have just updated to gluster v7.0 (Centos 7 repos), and rebooted all nodes.</div><div dir="ltr">Now both Engine and all my VMs are back online - so if you hit issues with 6.6 , you should give a try to 7.0 (and even 7.1 is coming soon) before deciding to wipe everything.</div><div dir="ltr"><br /></div><div dir="ltr">@Krutika,</div><div dir="ltr"><br /></div><div dir="ltr">I guess you will ask for the logs, so let's switch to gluster-users about this one ?</div><div dir="ltr"><br /></div><div dir="ltr">Best Regards,</div><div dir="ltr">Strahil Nikolov<br /></div><div><br /></div>
</div><div>
<div style="font-family:'helvetica neue' , 'helvetica' , 'arial' , sans-serif;font-size:13px;color:#26282a">
<div>
On Monday, November 25, 2019, 16:45:48 GMT-5, Strahil Nikolov <hunter86_bg@yahoo.com> wrote:
</div>
<div><br /></div>
<div><br /></div>
<div><div><div><div style="font-family:'courier new' , 'courier' , 'monaco' , monospace , sans-serif;font-size:16px"><div></div>
<div dir="ltr">Hi Krutika,</div><div dir="ltr"><br clear="none" /></div><div dir="ltr">I have enabled TRACE log level for the volume data_fast,</div><div dir="ltr"><br clear="none" /></div><div dir="ltr">but the issue is not much clear:</div><div dir="ltr">FUSE reports:</div><div dir="ltr"><br clear="none" /></div><div dir="ltr"><div><div>[2019-11-25 21:31:53.478130] I [MSGID: 133022] [shard.c:3674:shard_delete_shards] 0-data_fast-shard: Deleted shards of gfid=6d9ed2e5-d4f2-4749-839b-2f1</div><div>3a68ed472 from backend</div><div>[2019-11-25 21:32:43.564694] W [MSGID: 114031] [client-rpc-fops_v2.c:2634:client4_0_lookup_cbk] 0-data_fast-client-0: remote operation failed. Path: /.shard/b0af2b81-22cf-482e-9b2f-c431b6449dae.79 (00000000-0000-0000-0000-000000000000) [Permission denied]</div><div>[2019-11-25 21:32:43.565653] W [MSGID: 114031] [client-rpc-fops_v2.c:2634:client4_0_lookup_cbk] 0-data_fast-client-1: remote operation failed. Path: /.shard/b0af2b81-22cf-482e-9b2f-c431b6449dae.79 (00000000-0000-0000-0000-000000000000) [Permission denied]</div><div>[2019-11-25 21:32:43.565689] W [MSGID: 114031] [client-rpc-fops_v2.c:2634:client4_0_lookup_cbk] 0-data_fast-client-2: remote operation failed. Path: /.shard/b0af2b81-22cf-482e-9b2f-c431b6449dae.79 (00000000-0000-0000-0000-000000000000) [Permission denied]</div><div>[2019-11-25 21:32:43.565770] E [MSGID: 133010] [shard.c:2327:shard_common_lookup_shards_cbk] 0-data_fast-shard: Lookup on shard 79 failed. Base file gfid = b0af2b81-22cf-482e-9b2f-c431b6449dae [Permission denied]</div><div>[2019-11-25 21:32:43.565858] W [fuse-bridge.c:2830:fuse_readv_cbk] 0-glusterfs-fuse: 279: READ => -1 gfid=b0af2b81-22cf-482e-9b2f-c431b6449dae fd=0x7fbf40005ea8 (Permission denied)</div></div><br clear="none" /></div><div dir="ltr"><br clear="none" /></div><div dir="ltr">While the BRICK logs on ovirt1/gluster1 report:</div><div dir="ltr"><div><div>2019-11-25 21:32:43.564177] D [MSGID: 0] [io-threads.c:376:iot_schedule] 0-data_fast-io-threads: LOOKUP scheduled as fast priority fop</div><div>[2019-11-25 21:32:43.564194] T [MSGID: 0] [defaults.c:2008:default_lookup_resume] 0-stack-trace: stack-address: 0x7fc02c00bbf8, winding from data_fast-io-threads to data_fast-upcall</div><div>[2019-11-25 21:32:43.564206] T [MSGID: 0] [upcall.c:790:up_lookup] 0-stack-trace: stack-address: 0x7fc02c00bbf8, winding from data_fast-upcall to data_fast-leases</div><div>[2019-11-25 21:32:43.564215] T [MSGID: 0] [defaults.c:2766:default_lookup] 0-stack-trace: </div></div></div></div></div></div></div></div></div></div></blockquote></div>