<div dir="ltr"><div>Sorry about the late response.</div><div><br></div><div>I looked at the logs. These errors are originating from posix-acl translator -</div><div><i>[2019-11-17 07:55:47.090065] E [MSGID: 115050] [server-rpc-fops_v2.c:158:server4_lookup_cbk] 0-data_fast-server: 162496: LOOKUP /.shard/5985adcb-0f4d-4317-8a26-1652973a2350.6 (be318638-e8a0-4c6d-977d-7a937aa84806/5985adcb-0f4d-4317-8a26-1652973a2350.6), client: CTX_ID:8bff2d95-4629-45cb-a7bf-2412e48896bc-GRAPH_ID:0-PID:13394-HOST:ovirt1.localdomain-PC_NAME:data_fast-client-0-RECON_NO:-0, error-xlator: data_fast-access-control [Permission denied]<br>[2019-11-17 07:55:47.090174] I [MSGID: 139001] [posix-acl.c:263:posix_acl_log_permit_denied] 0-data_fast-access-control: client: CTX_ID:8bff2d95-4629-45cb-a7bf-2412e48896bc-GRAPH_ID:0-PID:13394-HOST:ovirt1.localdomain-PC_NAME:data_fast-client-0-RECON_NO:-0, gfid: be318638-e8a0-4c6d-977d-7a937aa84806, req(uid:36,gid:36,perm:1,ngrps:3), ctx(uid:0,gid:0,in-groups:0,perm:000,updated-fop:INVALID, acl:-) [Permission denied]<br>[2019-11-17 07:55:47.090209] E [MSGID: 115050] [server-rpc-fops_v2.c:158:server4_lookup_cbk] 0-data_fast-server: 162497: LOOKUP /.shard/5985adcb-0f4d-4317-8a26-1652973a2350.7 (be318638-e8a0-4c6d-977d-7a937aa84806/5985adcb-0f4d-4317-8a26-1652973a2350.7), client: CTX_ID:8bff2d95-4629-45cb-a7bf-2412e48896bc-GRAPH_ID:0-PID:13394-HOST:ovirt1.localdomain-PC_NAME:data_fast-client-0-RECON_NO:-0, error-xlator: data_fast-access-control [Permission denied]<br>[2019-11-17 07:55:47.090299] I [MSGID: 139001] [posix-acl.c:263:posix_acl_log_permit_denied] 0-data_fast-access-control: client: CTX_ID:8bff2d95-4629-45cb-a7bf-2412e48896bc-GRAPH_ID:0-PID:13394-HOST:ovirt1.localdomain-PC_NAME:data_fast-client-0-RECON_NO:-0, gfid: be318638-e8a0-4c6d-977d-7a937aa84806, req(uid:36,gid:36,perm:1,ngrps:3), ctx(uid:0,gid:0,in-groups:0,perm:000,updated-fop:INVALID, acl:-) [Permission denied]</i></div><div><i><br></i></div><div>Jiffin/Raghavendra Talur,</div><div>Can you help?</div><div><br></div><div>-Krutika<br></div><div><i></i></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Wed, Nov 27, 2019 at 2:11 PM Strahil Nikolov <<a href="mailto:hunter86_bg@yahoo.com">hunter86_bg@yahoo.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div><div style="font-family:courier new,courier,monaco,monospace,sans-serif;font-size:16px"><div></div>
<div dir="ltr">Hi Nir,All,</div><div dir="ltr"><br></div><div dir="ltr">it seems that 4.3.7 RC3 (and even RC4) are not the problem here(attached screenshot of oVirt running on v7 gluster).</div><div dir="ltr">It seems strange that both my serious issues with oVirt are related to gluster issue (1st gluster v3 to v5 migration and now this one).<br></div><div dir="ltr"><br></div><div dir="ltr">I have just updated to gluster v7.0 (Centos 7 repos), and rebooted all nodes.</div><div dir="ltr">Now both Engine and all my VMs are back online - so if you hit issues with 6.6 , you should give a try to 7.0 (and even 7.1 is coming soon) before deciding to wipe everything.</div><div dir="ltr"><br></div><div dir="ltr">@Krutika,</div><div dir="ltr"><br></div><div dir="ltr">I guess you will ask for the logs, so let's switch to gluster-users about this one ?</div><div dir="ltr"><br></div><div dir="ltr">Best Regards,</div><div dir="ltr">Strahil Nikolov<br></div><div><br></div>
</div><div id="gmail-m_-9210046325963362456ydpd955d419yahoo_quoted_5277723870">
<div style="font-family:"Helvetica Neue",Helvetica,Arial,sans-serif;font-size:13px;color:rgb(38,40,42)">
<div>
On Monday, November 25, 2019 at 16:45:48 GMT-5, Strahil Nikolov <<a href="mailto:hunter86_bg@yahoo.com" target="_blank">hunter86_bg@yahoo.com</a>> wrote:
</div>
<div><br></div>
<div><br></div>
<div><div id="gmail-m_-9210046325963362456ydpd955d419yiv9199013674"><div><div style="font-family:courier new,courier,monaco,monospace,sans-serif;font-size:16px"><div></div>
<div dir="ltr">Hi Krutika,</div><div dir="ltr"><br clear="none"></div><div dir="ltr">I have enabled TRACE log level for the volume data_fast,</div><div dir="ltr"><br clear="none"></div><div dir="ltr">but the issue is not much clear:</div><div dir="ltr">FUSE reports:</div><div dir="ltr"><br clear="none"></div><div dir="ltr"><div><div>[2019-11-25 21:31:53.478130] I [MSGID: 133022] [shard.c:3674:shard_delete_shards] 0-data_fast-shard: Deleted shards of gfid=6d9ed2e5-d4f2-4749-839b-2f1</div><div>3a68ed472 from backend</div><div>[2019-11-25 21:32:43.564694] W [MSGID: 114031] [client-rpc-fops_v2.c:2634:client4_0_lookup_cbk] 0-data_fast-client-0: remote operation failed. Path: /.shard/b0af2b81-22cf-482e-9b2f-c431b6449dae.79 (00000000-0000-0000-0000-000000000000) [Permission denied]</div><div>[2019-11-25 21:32:43.565653] W [MSGID: 114031] [client-rpc-fops_v2.c:2634:client4_0_lookup_cbk] 0-data_fast-client-1: remote operation failed. Path: /.shard/b0af2b81-22cf-482e-9b2f-c431b6449dae.79 (00000000-0000-0000-0000-000000000000) [Permission denied]</div><div>[2019-11-25 21:32:43.565689] W [MSGID: 114031] [client-rpc-fops_v2.c:2634:client4_0_lookup_cbk] 0-data_fast-client-2: remote operation failed. Path: /.shard/b0af2b81-22cf-482e-9b2f-c431b6449dae.79 (00000000-0000-0000-0000-000000000000) [Permission denied]</div><div>[2019-11-25 21:32:43.565770] E [MSGID: 133010] [shard.c:2327:shard_common_lookup_shards_cbk] 0-data_fast-shard: Lookup on shard 79 failed. Base file gfid = b0af2b81-22cf-482e-9b2f-c431b6449dae [Permission denied]</div><div>[2019-11-25 21:32:43.565858] W [fuse-bridge.c:2830:fuse_readv_cbk] 0-glusterfs-fuse: 279: READ => -1 gfid=b0af2b81-22cf-482e-9b2f-c431b6449dae fd=0x7fbf40005ea8 (Permission denied)</div></div><br clear="none"></div><div dir="ltr"><br clear="none"></div><div dir="ltr">While the BRICK logs on ovirt1/gluster1 report:</div><div dir="ltr"><div><div>2019-11-25 21:32:43.564177] D [MSGID: 0] [io-threads.c:376:iot_schedule] 0-data_fast-io-threads: LOOKUP scheduled as fast priority fop</div><div>[2019-11-25 21:32:43.564194] T [MSGID: 0] [defaults.c:2008:default_lookup_resume] 0-stack-trace: stack-address: 0x7fc02c00bbf8, winding from data_fast-io-threads to data_fast-upcall</div><div>[2019-11-25 21:32:43.564206] T [MSGID: 0] [upcall.c:790:up_lookup] 0-stack-trace: stack-address: 0x7fc02c00bbf8, winding from data_fast-upcall to data_fast-leases</div><div>[2019-11-25 21:32:43.564215] T [MSGID: 0] [defaults.c:2766:default_lookup] 0-stack-trace: stack-address: 0x7fc02c00bbf8, winding from data_fast-leases to data_fast-read-only</div><div>[2019-11-25 21:32:43.564222] T [MSGID: 0] [defaults.c:2766:default_lookup] 0-stack-trace: stack-address: 0x7fc02c00bbf8, winding from data_fast-read-only to data_fast-worm</div><div>[2019-11-25 21:32:43.564230] T [MSGID: 0] [defaults.c:2766:default_lookup] 0-stack-trace: stack-address: 0x7fc02c00bbf8, winding from data_fast-worm to data_fast-locks</div><div>[2019-11-25 21:32:43.564241] T [MSGID: 0] [posix.c:2897:pl_lookup] 0-stack-trace: stack-address: 0x7fc02c00bbf8, winding from data_fast-locks to data_fast-access-control</div><div>[2019-11-25 21:32:43.564254] I [MSGID: 139001] [posix-acl.c:263:posix_acl_log_permit_denied] 0-data_fast-access-control: client: CTX_ID:dae9ffad-6acd-4a43-9372-229a3018fde9-GRAPH_ID:0-PID:11468-HOST:ovirt2.localdomain-PC_NAME:data_fast-client-0-RECON_NO:-0, gfid: be318638-e8a0-4c6d-977d-7a937aa84806, req(uid:107,gid:107,perm:1,ngrps:4), ctx(uid:0,gid:0,in-groups:0,perm:000,updated-fop:INVALID, acl:-) [Permission 
denied]</div><div>[2019-11-25 21:32:43.564268] D [MSGID: 0] [posix-acl.c:1057:posix_acl_lookup] 0-stack-trace: stack-address: 0x7fc02c00bbf8, data_fast-access-control returned -1 error: Permission denied [Permission denied]</div><div>[2019-11-25 21:32:43.564279] D [MSGID: 0] [posix.c:2888:pl_lookup_cbk] 0-stack-trace: stack-address: 0x7fc02c00bbf8, data_fast-locks returned -1 error: Permission denied [Permission denied]</div><div>[2019-11-25 21:32:43.564289] D [MSGID: 0] [upcall.c:769:up_lookup_cbk] 0-stack-trace: stack-address: 0x7fc02c00bbf8, data_fast-upcall returned -1 error: Permission denied [Permission denied]</div><div>[2019-11-25 21:32:43.564302] D [MSGID: 0] [defaults.c:1349:default_lookup_cbk] 0-stack-trace: stack-address: 0x7fc02c00bbf8, data_fast-io-threads returned -1 error: Permission denied [Permission denied]</div><div>[2019-11-25 21:32:43.564313] T [marker.c:2918:marker_lookup_cbk] 0-data_fast-marker: lookup failed with Permission denied</div><div>[2019-11-25 21:32:43.564320] D [MSGID: 0] [marker.c:2955:marker_lookup_cbk] 0-stack-trace: stack-address: 0x7fc02c00bbf8, data_fast-marker returned -1 error: Permission denied [Permission denied]</div><div>[2019-11-25 21:32:43.564334] D [MSGID: 0] [index.c:2070:index_lookup_cbk] 0-stack-trace: stack-address: 0x7fc02c00bbf8, data_fast-index returned -1 error: Permission denied [Permission denied]</div></div><br clear="none"></div><div dir="ltr"><br clear="none"></div><div><br clear="none"></div>
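<div dir="ltr"><br clear="none"></div><div dir="ltr">The denial above comes from posix-acl's cached context for gfid be318638-e8a0-4c6d-977d-7a937aa84806 - the parent of the /.shard entries, i.e. the hidden .shard directory - which shows uid:0, gid:0 and perm:000 while the incoming request is from an unprivileged uid/gid. A minimal sketch of what could be checked directly on each brick (brick path taken from the volume info quoted further down; the shard name is one of the failing ones from the FUSE log):</div><div dir="ltr"><br clear="none"></div><div dir="ltr"><div># on-disk owner/mode of the hidden shard directory, on each gluster node</div><div>stat /gluster_bricks/data_fast/data_fast/.shard</div><div># any POSIX ACLs set on it?</div><div>getfacl /gluster_bricks/data_fast/data_fast/.shard</div><div># one of the shards the client could not look up</div><div>stat /gluster_bricks/data_fast/data_fast/.shard/b0af2b81-22cf-482e-9b2f-c431b6449dae.79</div><div># gluster xattrs (gfid, afr changelogs) in case a metadata heal is pending</div><div>getfattr -d -m . -e hex /gluster_bricks/data_fast/data_fast/.shard</div></div>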
</div><div id="gmail-m_-9210046325963362456ydpd955d419yiv9199013674ydpd8d24ca1yahoo_quoted_5573493689">
<div style="font-family:"Helvetica Neue",Helvetica,Arial,sans-serif;font-size:13px;color:rgb(38,40,42)">
<div>
On Monday, November 25, 2019 at 23:10:41 GMT+2, Strahil Nikolov <<a href="mailto:hunter86_bg@yahoo.com" target="_blank">hunter86_bg@yahoo.com</a>> wrote:
</div>
<div><br clear="none"></div>
<div><br clear="none"></div>
<div id="gmail-m_-9210046325963362456ydpd955d419yiv9199013674yqt79750"><div><div id="gmail-m_-9210046325963362456ydpd955d419yiv9199013674ydpd8d24ca1yiv7288614408"><div><div style="font-family:courier new,courier,monaco,monospace,sans-serif;font-size:16px"><div></div>
<div dir="ltr">Hi Krutika,</div><div dir="ltr"><br clear="none"></div><div dir="ltr">thanks for your assistance.</div><div dir="ltr">Let me summarize some info about the volume:</div><div dir="ltr"><br clear="none"></div><div dir="ltr"><div><div>Volume Name: data_fast</div><div>Type: Replicate</div><div>Volume ID: 378804bf-2975-44d8-84c2-b541aa87f9ef</div><div>Status: Started</div><div>Snapshot Count: 0</div><div>Number of Bricks: 1 x (2 + 1) = 3</div><div>Transport-type: tcp</div><div>Bricks:</div><div>Brick1: gluster1:/gluster_bricks/data_fast/data_fast</div><div>Brick2: gluster2:/gluster_bricks/data_fast/data_fast</div><div>Brick3: ovirt3:/gluster_bricks/data_fast/data_fast (arbiter)</div><div>Options Reconfigured:</div><div>performance.client-io-threads: on</div><div>nfs.disable: on</div><div>transport.address-family: inet</div><div>performance.quick-read: off</div><div>performance.read-ahead: off</div><div>performance.io-cache: off</div><div>performance.low-prio-threads: 32</div><div>network.remote-dio: on</div><div>cluster.eager-lock: enable</div><div>cluster.quorum-type: auto</div><div>cluster.server-quorum-type: server</div><div>cluster.data-self-heal-algorithm: full</div><div>cluster.locking-scheme: granular</div><div>cluster.shd-max-threads: 8</div><div>cluster.shd-wait-qlength: 10000</div><div>features.shard: on</div><div>user.cifs: off</div><div>cluster.choose-local: on</div><div>client.event-threads: 4</div><div>server.event-threads: 4</div><div>storage.owner-uid: 36</div><div>storage.owner-gid: 36</div><div>performance.strict-o-direct: on</div><div>network.ping-timeout: 30</div><div>cluster.granular-entry-heal: enable</div><div>cluster.enable-shared-storage: enable</div><div><br clear="none"></div><div><br clear="none"></div><div>[root@ovirt1 ~]# gluster volume get engine all | grep shard</div><div>features.shard on</div><div>features.shard-block-size 64MB</div><div>features.shard-lru-limit 16384</div><div>features.shard-deletion-rate 100</div></div><br clear="none"></div><div dir="ltr"><br clear="none"></div></div><div id="gmail-m_-9210046325963362456ydpd955d419yiv9199013674ydpd8d24ca1yiv7288614408ydpfeac9cd8yahoo_quoted_4753456371"><div style="font-family:"Helvetica Neue",Helvetica,Arial,sans-serif;font-size:13px;color:rgb(38,40,42)"><div><div id="gmail-m_-9210046325963362456ydpd955d419yiv9199013674ydpd8d24ca1yiv7288614408ydpfeac9cd8yiv8780378952"><div dir="ltr"><div><div dir="ltr">On Sat, Nov 23, 2019 at 3:14 AM Nir Soffer <<a shape="rect" href="mailto:nsoffer@redhat.com" rel="nofollow" target="_blank">nsoffer@redhat.com</a>> wrote:<br clear="none"></div><blockquote style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr">On Fri, Nov 22, 2019 at 10:41 PM Strahil Nikolov <<a shape="rect" href="mailto:hunter86_bg@yahoo.com" rel="nofollow" target="_blank">hunter86_bg@yahoo.com</a>> wrote:<br clear="none"></div><div><blockquote style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div><div><div></div>
<div><span style="color:rgb(38,40,42)">On Thu, Nov 21, 2019 at 8:20 AM Sahina Bose <</span><a shape="rect" href="mailto:sabose@redhat.com" rel="nofollow" target="_blank">sabose@redhat.com</a><span style="color:rgb(38,40,42)">> wrote:</span><br clear="none"></div></div><div id="gmail-m_-9210046325963362456ydpd955d419yiv9199013674ydpd8d24ca1yiv7288614408ydpfeac9cd8yiv8780378952gmail-m_8870908515832721209gmail-m_-7719382260470531474ydpa8797ff6yahoo_quoted_5167140539"><div><div><div id="gmail-m_-9210046325963362456ydpd955d419yiv9199013674ydpd8d24ca1yiv7288614408ydpfeac9cd8yiv8780378952gmail-m_8870908515832721209gmail-m_-7719382260470531474ydpa8797ff6yiv6914405601"><div dir="ltr"><div><blockquote style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr"><br clear="none"></div><br clear="none"><div><div dir="ltr">On Thu, Nov 21, 2019 at 6:03 AM Strahil Nikolov <<a shape="rect" href="mailto:hunter86_bg@yahoo.com" rel="nofollow" target="_blank">hunter86_bg@yahoo.com</a>> wrote:<br clear="none"></div><blockquote style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div><div><div></div>
<div dir="ltr">Hi All,</div><div dir="ltr"><br clear="none"></div><div dir="ltr">another clue in the logs :</div><div dir="ltr"><div><div>[2019-11-21 00:29:50.536631] W [MSGID: 114031] [client-rpc-fops_v2.c:2634:client4_0_lookup_cbk] 0-data_fast-client-1: remote operation failed. Path: /.shard/b0af2b81-22cf-482e-9b2f-c431b6449dae.79 (00000000-0000-0000-0000-000000000000) [Permission denied]</div><div>[2019-11-21 00:29:50.536798] W [MSGID: 114031] [client-rpc-fops_v2.c:2634:client4_0_lookup_cbk] 0-data_fast-client-0: remote operation failed. Path: /.shard/b0af2b81-22cf-482e-9b2f-c431b6449dae.79 (00000000-0000-0000-0000-000000000000) [Permission denied]</div><div>[2019-11-21 00:29:50.536959] W [MSGID: 114031] [client-rpc-fops_v2.c:2634:client4_0_lookup_cbk] 0-data_fast-client-2: remote operation failed. Path: /.shard/b0af2b81-22cf-482e-9b2f-c431b6449dae.79 (00000000-0000-0000-0000-000000000000) [Permission denied]</div><div>[2019-11-21 00:29:50.537007] E [MSGID: 133010] [shard.c:2327:shard_common_lookup_shards_cbk] 0-data_fast-shard: Lookup on shard 79 failed. Base file gfid = b0af2b81-22cf-482e-9b2f-c431b6449dae [Permission denied]</div><div>[2019-11-21 00:29:50.537066] W [fuse-bridge.c:2830:fuse_readv_cbk] 0-glusterfs-fuse: 12458: READ => -1 gfid=b0af2b81-22cf-482e-9b2f-c431b6449dae fd=0x7fc63c00fe18 (Permission denied)</div><div>[2019-11-21 00:30:01.177665] I [MSGID: 133022] [shard.c:3674:shard_delete_shards] 0-data_fast-shard: Deleted shards of gfid=eb103fbf-80dc-425d-882f-1e4efe510db5 from backend</div><div>[2019-11-21 00:30:13.132756] W [MSGID: 114031] [client-rpc-fops_v2.c:2634:client4_0_lookup_cbk] 0-data_fast-client-0: remote operation failed. Path: /.shard/17c663c2-f582-455b-b806-3b9d01fb2c6c.79 (00000000-0000-0000-0000-000000000000) [Permission denied]</div><div>[2019-11-21 00:30:13.132824] W [MSGID: 114031] [client-rpc-fops_v2.c:2634:client4_0_lookup_cbk] 0-data_fast-client-1: remote operation failed. Path: /.shard/17c663c2-f582-455b-b806-3b9d01fb2c6c.79 (00000000-0000-0000-0000-000000000000) [Permission denied]</div><div>[2019-11-21 00:30:13.133217] W [MSGID: 114031] [client-rpc-fops_v2.c:2634:client4_0_lookup_cbk] 0-data_fast-client-2: remote operation failed. Path: /.shard/17c663c2-f582-455b-b806-3b9d01fb2c6c.79 (00000000-0000-0000-0000-000000000000) [Permission denied]</div><div>[2019-11-21 00:30:13.133238] E [MSGID: 133010] [shard.c:2327:shard_common_lookup_shards_cbk] 0-data_fast-shard: Lookup on shard 79 failed. Base file gfid = 17c663c2-f582-455b-b806-3b9d01fb2c6c [Permission denied]</div><div>[2019-11-21 00:30:13.133264] W [fuse-bridge.c:2830:fuse_readv_cbk] 0-glusterfs-fuse: 12660: READ => -1 gfid=17c663c2-f582-455b-b806-3b9d01fb2c6c fd=0x7fc63c007038 (Permission denied)</div><div>[2019-11-21 00:30:38.489449] W [MSGID: 114031] [client-rpc-fops_v2.c:2634:client4_0_lookup_cbk] 0-data_fast-client-0: remote operation failed. Path: /.shard/a10a5ae8-108b-4d78-9e65-cca188c27fc4.6 (00000000-0000-0000-0000-000000000000) [Permission denied]</div><div>[2019-11-21 00:30:38.489520] W [MSGID: 114031] [client-rpc-fops_v2.c:2634:client4_0_lookup_cbk] 0-data_fast-client-1: remote operation failed. Path: /.shard/a10a5ae8-108b-4d78-9e65-cca188c27fc4.6 (00000000-0000-0000-0000-000000000000) [Permission denied]</div><div>[2019-11-21 00:30:38.489669] W [MSGID: 114031] [client-rpc-fops_v2.c:2634:client4_0_lookup_cbk] 0-data_fast-client-2: remote operation failed. 
Path: /.shard/a10a5ae8-108b-4d78-9e65-cca188c27fc4.6 (00000000-0000-0000-0000-000000000000) [Permission denied]</div><div>[2019-11-21 00:30:38.489717] E [MSGID: 133010] [shard.c:2327:shard_common_lookup_shards_cbk] 0-data_fast-shard: Lookup on shard 6 failed. Base file gfid = a10a5ae8-108b-4d78-9e65-cca188c27fc4 [Permission denied]</div><div>[2019-11-21 00:30:38.489777] W [fuse-bridge.c:2830:fuse_readv_cbk] 0-glusterfs-fuse: 12928: READ => -1 gfid=a10a5ae8-108b-4d78-9e65-cca188c27fc4 fd=0x7fc63c01a058 (Permission denied)</div></div><br clear="none"></div><div dir="ltr"><br clear="none"></div><div dir="ltr">Anyone got an idea why is it happening?</div><div dir="ltr">I checked user/group and selinux permissions - all OK</div></div></div></blockquote></div></div></blockquote><div><br clear="none"></div><div>>Can you share the commands (and output) used to check this?</div><div dir="ltr">I first thought that the file is cached in memory and that's why vdsm user can read the file , but the following shows opposite:</div><div dir="ltr"><br clear="none"></div><div dir="ltr"><div><div>[root@ovirt1 94f763e9-fd96-4bee-a6b2-31af841a918b]# ll</div><div>total 562145</div><div>-rw-rw----. 1 vdsm kvm 5368709120 Nov 12 23:29 5b1d3113-5cca-4582-9029-634b16338a2f</div><div>-rw-rw----. 1 vdsm kvm 1048576 Nov 11 14:11 5b1d3113-5cca-4582-9029-634b16338a2f.lease</div><div>-rw-r--r--. 1 vdsm kvm 313 Nov 11 14:11 5b1d3113-5cca-4582-9029-634b16338a2f.meta</div><div>[root@ovirt1 94f763e9-fd96-4bee-a6b2-31af841a918b]# pwd</div><div>/rhev/data-center/mnt/glusterSD/gluster1:_data__fast/396604d9-2a9e-49cd-9563-fdc79981f67b/images/94f763e9-fd96-4bee-a6b2-31af841a918b</div><div>[root@ovirt1 94f763e9-fd96-4bee-a6b2-31af841a918b]# echo 3 > /proc/sys/vm/drop_caches </div></div></div></div></div></div></div></div></div></div></blockquote><div><br clear="none"></div><div>I would use iflag=direct instead, no need to mess with caches. 
Vdsm always use direct I/O.</div><div> </div><blockquote style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div><div id="gmail-m_-9210046325963362456ydpd955d419yiv9199013674ydpd8d24ca1yiv7288614408ydpfeac9cd8yiv8780378952gmail-m_8870908515832721209gmail-m_-7719382260470531474ydpa8797ff6yahoo_quoted_5167140539"><div><div><div id="gmail-m_-9210046325963362456ydpd955d419yiv9199013674ydpd8d24ca1yiv7288614408ydpfeac9cd8yiv8780378952gmail-m_8870908515832721209gmail-m_-7719382260470531474ydpa8797ff6yiv6914405601"><div dir="ltr"><div><div dir="ltr"><div><div>[root@ovirt1 94f763e9-fd96-4bee-a6b2-31af841a918b]# sudo -u vdsm dd if=5b1d3113-5cca-4582-9029-634b16338a2f of=/dev/null bs=4M status=progress</div><div>dd: error reading ‘5b1d3113-5cca-4582-9029-634b16338a2f’: Permission denied</div></div></div></div></div></div></div></div></div></div></blockquote><div><br clear="none"></div><div>You got permissions denied...</div><div> </div><blockquote style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div><div id="gmail-m_-9210046325963362456ydpd955d419yiv9199013674ydpd8d24ca1yiv7288614408ydpfeac9cd8yiv8780378952gmail-m_8870908515832721209gmail-m_-7719382260470531474ydpa8797ff6yahoo_quoted_5167140539"><div><div><div id="gmail-m_-9210046325963362456ydpd955d419yiv9199013674ydpd8d24ca1yiv7288614408ydpfeac9cd8yiv8780378952gmail-m_8870908515832721209gmail-m_-7719382260470531474ydpa8797ff6yiv6914405601"><div dir="ltr"><div><div dir="ltr"><div><div>16+0 records in</div><div>16+0 records out</div><div>67108864 bytes (67 MB) copied, 0.198372 s, 338 MB/s</div></div></div></div></div></div></div></div></div></div></blockquote></div></div></blockquote><div> </div><div>>Seems like it could read upto ~67MB successfully before it encountered 'Permission denied' errors. Assuming a shard-block-size >of 64MB, looks like all the shards under /.shard could not be accessed.</div><div>>Could you share the following pieces of information:</div><div>>1. brick logs of data_fast</div><div dir="ltr">Attached in <span>data_fast-brick-logs.tgz</span></div><div dir="ltr"><span><br clear="none"></span></div><div>>2. ls -la of .shard relative to the bricks (NOT the mount) on all the bricks of data_fast<br clear="none"></div><div dir="ltr">Not sure if I understood you correctly, so I ran "ls -lad /gluster_bricks/data_fast/data_fast/.shard". If it's not what you wanted to see - just correct me.</div><div dir="ltr">I have run multiple "find" commands with "-exec chown vdsm:kvm {} \;" , just to be sure that this is not happening.</div><div>>3. and ls -la of all shards under .shard of data_fast (perhaps a handful of them have root permission assigned somehow which is causing access to be denied? Perhaps while resolving pending heals manually? 
)<br clear="none"></div><div dir="ltr">All shards seem to be owned by "vdsm:kvm" with 660.</div><div id="gmail-m_-9210046325963362456ydpd955d419yiv9199013674ydpd8d24ca1yiv7288614408yqtfd67625"><div dir="ltr"><br clear="none"></div><div dir="ltr"><br clear="none"></div><div><br clear="none"></div><div>-Krutika<div id="gmail-m_-9210046325963362456ydpd955d419yiv9199013674ydpd8d24ca1yiv7288614408ydpfeac9cd8yiv8780378952yqtfd71285"><br clear="none"></div></div><blockquote style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div id="gmail-m_-9210046325963362456ydpd955d419yiv9199013674ydpd8d24ca1yiv7288614408ydpfeac9cd8yiv8780378952yqtfd53515"><div dir="ltr"><div><div><br clear="none"></div><div>And dd continue to read data?!</div><div><br clear="none"></div><div>I have never seen anything like this.</div><div><br clear="none"></div><div>It will be helpful to run this with strace:</div><div><br clear="none"></div><div> strace -t -TT -o dd.strace dd if=vol-id of=/dev/null iflag=direct bs=8M status=progress</div><div><br clear="none"></div><div>And share dd.strace.</div><div><br clear="none"></div><div>Logs in /var/log/glusterfs/exportname.log will contain useful info for this test.</div><div> </div><blockquote style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div><div id="gmail-m_-9210046325963362456ydpd955d419yiv9199013674ydpd8d24ca1yiv7288614408ydpfeac9cd8yiv8780378952gmail-m_8870908515832721209gmail-m_-7719382260470531474ydpa8797ff6yahoo_quoted_5167140539"><div><div><div id="gmail-m_-9210046325963362456ydpd955d419yiv9199013674ydpd8d24ca1yiv7288614408ydpfeac9cd8yiv8780378952gmail-m_8870908515832721209gmail-m_-7719382260470531474ydpa8797ff6yiv6914405601"><div dir="ltr"><div><div dir="ltr"><div><div>[root@ovirt1 94f763e9-fd96-4bee-a6b2-31af841a918b]# dd if=5b1d3113-5cca-4582-9029-634b16338a2f of=/dev/null bs=4M status=progress</div><div>5356126208 bytes (5.4 GB) copied, 12.061393 s, 444 MB/s</div><div>1280+0 records in</div><div>1280+0 records out</div><div>5368709120 bytes (5.4 GB) copied, 12.0876 s, 444 MB/s</div><div>[root@ovirt1 94f763e9-fd96-4bee-a6b2-31af841a918b]# sudo -u vdsm dd if=5b1d3113-5cca-4582-9029-634b16338a2f of=/dev/null bs=4M status=progress</div><div>3598712832 bytes (3.6 GB) copied, 1.000540 s, 3.6 GB/s</div><div>1280+0 records in</div><div>1280+0 records out</div><div>5368709120 bytes (5.4 GB) copied, 1.47071 s, 3.7 GB/s</div><div>[root@ovirt1 94f763e9-fd96-4bee-a6b2-31af841a918b]# echo 3 > /proc/sys/vm/drop_caches </div><div>[root@ovirt1 94f763e9-fd96-4bee-a6b2-31af841a918b]# sudo -u vdsm dd if=5b1d3113-5cca-4582-9029-634b16338a2f of=/dev/null bs=4M status=progress</div><div>5171576832 bytes (5.2 GB) copied, 12.071837 s, 428 MB/s</div><div>1280+0 records in</div><div>1280+0 records out</div><div>5368709120 bytes (5.4 GB) copied, 12.4873 s, 430 MB/s</div></div><br clear="none"></div><div dir="ltr">As you can see , once root user reads the file -> vdsm user can also do that.</div></div></div></div></div></div></div></div></blockquote><div><br clear="none"></div><div>Smells like issue on gluster side.</div><div> </div><blockquote style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div id="gmail-m_-9210046325963362456ydpd955d419yiv9199013674ydpd8d24ca1yiv7288614408ydpfeac9cd8yiv8780378952gmail-m_8870908515832721209gmail-m_-7719382260470531474ydpa8797ff6yahoo_quoted_5167140539"><div><div><div 
id="gmail-m_-9210046325963362456ydpd955d419yiv9199013674ydpd8d24ca1yiv7288614408ydpfeac9cd8yiv8780378952gmail-m_8870908515832721209gmail-m_-7719382260470531474ydpa8797ff6yiv6914405601"><div dir="ltr"><div dir="ltr">>I would try this on the hypervisor to check what vdsm/qemu see:<br clear="none"></div><div><br clear="none"></div><div>>$ ls -lahRZ /rhv/data-center/mnt/glusterSD/gluster-server:_path</div><div dir="ltr">I'm attaching the output of the find I run, but this one should be enough:</div><div dir="ltr"><div><div>[root@ovirt1 ~]# find /rhev/data-center/mnt/glusterSD/*/[0-9]*/images/ -not -user vdsm -print</div><div></div></div></div></div></div></div></div></div></blockquote><div><br clear="none"></div><div>A full output of ls -lahRZ, showing user, group, permissions bits, and selinux label</div><div>of the entire tree will be more useful.</div><div> </div><blockquote style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div id="gmail-m_-9210046325963362456ydpd955d419yiv9199013674ydpd8d24ca1yiv7288614408ydpfeac9cd8yiv8780378952gmail-m_8870908515832721209gmail-m_-7719382260470531474ydpa8797ff6yahoo_quoted_5167140539"><div><div><div id="gmail-m_-9210046325963362456ydpd955d419yiv9199013674ydpd8d24ca1yiv7288614408ydpfeac9cd8yiv8780378952gmail-m_8870908515832721209gmail-m_-7719382260470531474ydpa8797ff6yiv6914405601"><div dir="ltr"><div>>Also, to make sure we don't have selinux issue on the hypervisor, you can change</div><div>>selinux to permissive mode:</div><div><br clear="none"></div><div> > setenforce 0</div><div><br clear="none"></div><div dir="ltr">This is the first thing I did and the systems were still in permissive when I tried again.I'm 99.99% sure it's not selinux.</div><div dir="ltr"><br clear="none"></div><div dir="ltr"><br clear="none"></div><div>>And then try again. If this was selinux issue the permission denied issue will disappear.</div><div>>If this is the case please provide the output of:</div><div><br clear="none"></div><div> > ausearh -m AVC -ts today</div><div><br clear="none"></div><div>>If the issue still exists, we eliminated selinux, and you can enable it again:</div><div><br clear="none"></div><div> > setenforce 1</div><div><br clear="none"></div><div dir="ltr"><span></span><div><div>[root@ovirt3 ~]# ausearch -m AVC -ts today</div><div><no matches></div></div><div><div>[root@ovirt2 ~]# ausearch -m AVC -ts today</div><div><no matches></div></div></div><div dir="ltr"><div><div>[root@ovirt1 ~]# ausearch -m AVC -ts today</div><div><no matches></div></div></div></div></div></div></div></div></blockquote><div><br clear="none"></div><div>So this is not selinux on the hypervisor. I wonder if it can be selinux on the gluster side?</div><div> </div><blockquote style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div id="gmail-m_-9210046325963362456ydpd955d419yiv9199013674ydpd8d24ca1yiv7288614408ydpfeac9cd8yiv8780378952gmail-m_8870908515832721209gmail-m_-7719382260470531474ydpa8797ff6yahoo_quoted_5167140539"><div><div><div id="gmail-m_-9210046325963362456ydpd955d419yiv9199013674ydpd8d24ca1yiv7288614408ydpfeac9cd8yiv8780378952gmail-m_8870908515832721209gmail-m_-7719382260470531474ydpa8797ff6yiv6914405601"><div dir="ltr"><div dir="ltr">I have a vague feeling that the issue is related to gluster v6.5 to 6.6 upgrade which I several days before... 
So if any logs are needed (or debug enabled), just let me know.</div></div></div></div></div></blockquote><div><br clear="none"></div><div>If this was the last change, and it worked before, then that is most likely the cause.</div><div><br clear="none"></div><div>Nir</div></div></div></div>
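<div><br clear="none"></div><div>One detail worth spelling out: the "~67MB" that dd managed to read is 67108864 bytes, i.e. exactly 64 MiB - one shard block - so the first read that fails is the first one served from under /.shard. A small sketch (brick path and 64MB block size taken from the volume info above, base gfid from the FUSE log) for mapping a failing offset to the shard file on a brick:</div><div><br clear="none"></div><div><div># shard index 0 is the base file itself; index N (N at least 1) lives at .shard/BASE_GFID.N</div><div>BRICK=/gluster_bricks/data_fast/data_fast</div><div>BASE_GFID=b0af2b81-22cf-482e-9b2f-c431b6449dae</div><div>OFFSET=$((64 * 1024 * 1024))   # first byte past the base file, where the reads started failing</div><div>IDX=$(( OFFSET / (64 * 1024 * 1024) ))</div><div>ls -l "$BRICK/.shard/$BASE_GFID.$IDX"</div><div>getfacl "$BRICK/.shard/$BASE_GFID.$IDX"</div></div>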
_______________________________________________<br clear="none">
Users mailing list -- <a shape="rect" href="mailto:users@ovirt.org" rel="nofollow" target="_blank">users@ovirt.org</a><br clear="none">
To unsubscribe send an email to <a shape="rect" href="mailto:users-leave@ovirt.org" rel="nofollow" target="_blank">users-leave@ovirt.org</a><br clear="none">
Privacy Statement: <a shape="rect" href="https://www.ovirt.org/site/privacy-policy/" rel="nofollow" target="_blank">https://www.ovirt.org/site/privacy-policy/</a><br clear="none">
oVirt Code of Conduct: <a shape="rect" href="https://www.ovirt.org/community/about/community-guidelines/" rel="nofollow" target="_blank">https://www.ovirt.org/community/about/community-guidelines/</a><br clear="none">
List Archives: <a shape="rect" href="https://lists.ovirt.org/archives/list/users@ovirt.org/message/AKLLOJKG6NEJUB264RA5PLQMGWNG3GD3/" rel="nofollow" target="_blank">https://lists.ovirt.org/archives/list/users@ovirt.org/message/AKLLOJKG6NEJUB264RA5PLQMGWNG3GD3/</a><div id="gmail-m_-9210046325963362456ydpd955d419yiv9199013674ydpd8d24ca1yiv7288614408ydpfeac9cd8yiv8780378952yqtfd19812"><br clear="none">
</div></blockquote></div></div></div></div></div><div id="gmail-m_-9210046325963362456ydpd955d419yiv9199013674ydpd8d24ca1yiv7288614408yqtfd08220">
</div></div><div id="gmail-m_-9210046325963362456ydpd955d419yiv9199013674ydpd8d24ca1yiv7288614408yqtfd23223">
</div></div></div></div></div></div>
</div>
</div></div></div></div>
</div>
</div></div></blockquote></div>