<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Mon, Jan 22, 2018 at 12:33 AM, Samuli Heinonen <span dir="ltr">&lt;<a href="mailto:samppah@neutraali.net" target="_blank">samppah@neutraali.net</a>&gt;</span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">

<div bgcolor="#FFFFFF" text="#000000">Hi again,<br>
<br>
here is more information regarding issue described earlier<br>
<br>
It looks like self healing is stuck. According to &quot;heal statistics&quot; 
crawl began at Sat Jan 20 12:56:19 2018 and it&#39;s still going on (It&#39;s 
around Sun Jan 21 20:30 when writing this). However glustershd.log says 
that last heal was completed at &quot;2018-01-20 11:00:13.090697&quot; (which is 
13:00 UTC+2). Also &quot;heal info&quot; has been running now for over 16 hours 
without any information. In statedump I can see that storage nodes have 
locks on files and some of those are blocked. Ie. Here again it says 
that ovirt8z2 is having active lock even ovirt8z2 crashed after the lock
 was granted.:<br>
<br>
> [xlator.features.locks.zone2-ssd1-vmstor1-locks.inode]
> path=/.shard/3d55f8cc-cda9-489a-b0a3-fd0f43d67876.27
> mandatory=0
> inodelk-count=3
> lock-dump.domain.domain=zone2-ssd1-vmstor1-replicate-0:self-heal
> inodelk.inodelk[0](ACTIVE)=type=WRITE, whence=0, start=0, len=0, pid = 18446744073709551610, owner=d0c6d857a87f0000, client=0x7f885845efa0, connection-id=sto2z2.xxx-10975-2018/01/20-10:56:14:649541-zone2-ssd1-vmstor1-client-0-0-0, granted at 2018-01-20 10:59:52
> lock-dump.domain.domain=zone2-ssd1-vmstor1-replicate-0:metadata
> lock-dump.domain.domain=zone2-ssd1-vmstor1-replicate-0
> inodelk.inodelk[0](ACTIVE)=type=WRITE, whence=0, start=0, len=0, pid = 3420, owner=d8b9372c397f0000, client=0x7f8858410be0, connection-id=ovirt8z2.xxx.com-5652-2017/12/27-09:49:02:946825-zone2-ssd1-vmstor1-client-0-7-0, granted at 2018-01-20 08:57:23
> inodelk.inodelk[1](BLOCKED)=type=WRITE, whence=0, start=0, len=0, pid = 18446744073709551610, owner=d0c6d857a87f0000, client=0x7f885845efa0, connection-id=sto2z2.xxx-10975-2018/01/20-10:56:14:649541-zone2-ssd1-vmstor1-client-0-0-0, blocked at 2018-01-20 10:59:52
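>
> (For reference, the output above comes from the standard gluster CLI
> commands; a sketch, with our volume name filled in:
>
> # gluster volume heal zone2-ssd1-vmstor1 statistics
> # gluster volume heal zone2-ssd1-vmstor1 info
> # gluster volume statedump zone2-ssd1-vmstor1
>
> By default the statedump files should end up under /var/run/gluster/
> on each storage node.)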
<br>
I&#39;d also like to add that volume had arbiter brick before crash 
happened. We decided to remove it because we thought that it was causing
 issues. However now I think that this was unnecessary. After the crash 
arbiter logs had lots of messages like this:<br>
[2018-01-20 10:19:36.515717] I [MSGID: 115072] 
[server-rpc-fops.c:1640:<wbr>server_setattr_cbk] 0-zone2-ssd1-vmstor1-server:
 37374187: SETATTR &lt;gfid:a52055bd-e2e9-42dd-92a3-<wbr>e96b693bcafe&gt; 
(a52055bd-e2e9-42dd-92a3-<wbr>e96b693bcafe) ==&gt; (Operation not permitted) 
[Operation not permitted]<br>
<br>
Is there anyways to force self heal to stop? Any help would be very much
 appreciated :)<br></div></blockquote><div><br></div><div>The locks are contending in afr self-heal and data path domains. It&#39;s possible that the deadlock is not caused by the hypervisor as if that were the case, the locks should have been released when it crashed/disconnected.</div><div><br></div><div>Adding AFR devs to check what&#39;s causing the deadlock in the first place.</div><div><br></div><div>-Krutika</div><div><br></div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div bgcolor="#FFFFFF" text="#000000">
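
P.S. To spell out the domains: each lock-dump.domain.domain line in the
statedump names a lock domain. The one ending in ":self-heal" holds
locks taken by AFR self-heal, while the bare
"zone2-ssd1-vmstor1-replicate-0" domain holds locks taken by the data
path. In the dump above, self-heal (the sto2z2 connection) holds the
active lock in the self-heal domain but is BLOCKED in the data-path
domain behind the active lock still attributed to the ovirt8z2
connection.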
<br>
Best regards,<br>
Samuli Heinonen<br>
<br>
<br>
<br>
<br>
<span>

</span><br>
<blockquote style="border:0px none" type="cite">
  <div style="margin:30px 25px 10px 25px" class="m_-9075664414007992794__pbConvHr"><div style="width:100%;border-top:2px solid #edf1f4;padding-top:10px">   <div style="display:inline-block;white-space:nowrap;vertical-align:middle;width:49%">
           <a href="mailto:samppah@neutraali.net" style="color:#485664!important;padding-right:6px;font-weight:500;text-decoration:none!important" target="_blank">Samuli Heinonen</a></div>   <div style="display:inline-block;white-space:nowrap;vertical-align:middle;width:48%;text-align:right">     <font color="#909AA4"><span style="padding-left:6px">20 
January 2018 at 21.57</span></font></div>    </div></div><div><div class="h5">
  <div style="color:#909aa4;margin-left:24px;margin-right:24px" class="m_-9075664414007992794__pbConvBody">Hi all!
<br>
<br>One hypervisor on our virtualization environment crashed and now 
some of 
the VM images cannot be accessed. After investigation we found out that 
there was lots of images that still had active lock on crashed 
hypervisor. We were able to remove locks from &quot;regular files&quot;, but it 
doesn&#39;t seem possible to remove locks from shards.
<br>
<br>We are running GlusterFS 3.8.15 on all nodes.
<br>
<br>Here is part of statedump that shows shard having active lock on 
crashed 
node:
<br>[xlator.features.locks.zone2-<wbr>ssd1-vmstor1-locks.inode]
<br>path=/.shard/75353c17-d6b8-<wbr>485d-9baf-fd6c700e39a1.21
<br>mandatory=0
<br>inodelk-count=1
<br>lock-dump.domain.domain=zone2-<wbr>ssd1-vmstor1-replicate-0:<wbr>metadata
<br>lock-dump.domain.domain=zone2-<wbr>ssd1-vmstor1-replicate-0:self-<wbr>heal
<br>lock-dump.domain.domain=zone2-<wbr>ssd1-vmstor1-replicate-0
<br>inodelk.inodelk[0](ACTIVE)=<wbr>type=WRITE, whence=0, start=0, len=0, pid
 = 
3568, owner=14ce372c397f0000, client=0x7f3198388770, connection-id 
ovirt8z2.xxx-5652-2017/12/27-<wbr>09:49:02:946825-zone2-ssd1-<wbr>vmstor1-client-1-7-0,
 
granted at 2018-01-20 08:57:24
<br>
<br>If we try to run clear-locks we get following error message:
<br># gluster volume clear-locks zone2-ssd1-vmstor1 
/.shard/75353c17-d6b8-485d-<wbr>9baf-fd6c700e39a1.21 kind all inode
<br>Volume clear-locks unsuccessful
<br>clear-locks getxattr command failed. Reason: Operation not permitted
<br>
<br>Gluster vol info if needed:
<br>Volume Name: zone2-ssd1-vmstor1
<br>Type: Replicate
<br>Volume ID: b6319968-690b-4060-8fff-<wbr>b212d2295208
<br>Status: Started
<br>Snapshot Count: 0
<br>Number of Bricks: 1 x 2 = 2
<br>Transport-type: rdma
<br>Bricks:
<br>Brick1: sto1z2.xxx:/ssd1/zone2-<wbr>vmstor1/export
<br>Brick2: sto2z2.xxx:/ssd1/zone2-<wbr>vmstor1/export
<br>Options Reconfigured:
<br>cluster.shd-wait-qlength: 10000
<br>cluster.shd-max-threads: 8
<br>cluster.locking-scheme: granular
<br>performance.low-prio-threads: 32
<br>cluster.data-self-heal-<wbr>algorithm: full
<br>performance.client-io-threads: off
<br>storage.linux-aio: off
<br>performance.readdir-ahead: on
<br>client.event-threads: 16
<br>server.event-threads: 16
<br>performance.strict-write-<wbr>ordering: off
<br>performance.quick-read: off
<br>performance.read-ahead: on
<br>performance.io-cache: off
<br>performance.stat-prefetch: off
<br>cluster.eager-lock: enable
<br>network.remote-dio: on
<br>cluster.quorum-type: none
<br>network.ping-timeout: 22
<br>performance.write-behind: off
<br>nfs.disable: on
<br>features.shard: on
<br>features.shard-block-size: 512MB
<br>storage.owner-uid: 36
<br>storage.owner-gid: 36
<br>performance.io-thread-count: 64
<br>performance.cache-size: 2048MB
<br>performance.write-behind-<wbr>window-size: 256MB
<br>server.allow-insecure: on
<br>cluster.ensure-durability: off
<br>config.transport: rdma
<br>server.outstanding-rpc-limit: 512
<br>diagnostics.brick-log-level: INFO
<br>
<br>Any recommendations how to advance from here?
<br>
<br>Best regards,
<br>Samuli Heinonen
<br>
<br>______________________________<wbr>_________________
<br>Gluster-users mailing list
<br><a class="m_-9075664414007992794moz-txt-link-abbreviated" href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a>
<br><a class="m_-9075664414007992794moz-txt-link-freetext" href="http://lists.gluster.org/mailman/listinfo/gluster-users" target="_blank">http://lists.gluster.org/<wbr>mailman/listinfo/gluster-users</a>
<br></div>
</div></div></blockquote>
<br>
</div>
<br>______________________________<wbr>_________________<br>
Gluster-users mailing list<br>
<a href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br>
<a href="http://lists.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">http://lists.gluster.org/<wbr>mailman/listinfo/gluster-users</a><br></blockquote></div><br></div></div>