<div dir="ltr"><span style="font-size:12.8px">Is it possible that self-heal process on the kvm VM runs intensively and shutdown automatically the vm?</span><br><div><span style="font-size:12.8px"><br></span></div><div><span style="font-size:12.8px">Daniele</span></div></div><div class="gmail_extra"><br><div class="gmail_quote">2017-02-21 16:53 GMT+01:00 Alessandro Briosi <span dir="ltr"><<a href="mailto:ab1@metalit.com" target="_blank">ab1@metalit.com</a>></span>:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Il 21/02/2017 09:53, Alessandro Briosi ha scritto:<br>
> Hi all,
> a couple of times now, a KVM VM of mine has suddenly been shut down
> (without any apparent reason).
>
> At the times this happened, the only things I can find in the logs are
> related to gluster:
>
> The stops happened at 16.19 on the 13th and at 03.34 on the 19th (these
> times are local time, which is GMT+1).
> I think, though, that the gluster logs are in GMT.
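> (Side note: to line the local stop times up with the UTC timestamps in
> the logs, GNU date can do the conversion. A quick sketch:
>
>   date -u -d '2017-02-13 16:19 +0100'
>   # prints: Mon Feb 13 15:19:00 UTC 2017
>
> So the first stop would be around 15:19 UTC in the gluster logs.)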
>
> This is from the 1st node (which also runs the KVM and basically
> should be a client of itself):
>
> [2017-02-07 22:29:15.030197] I [MSGID: 114035] [client-handshake.c:202:client_set_lk_version_cbk] 0-datastore1-client-1: Server lk version = 1
> [2017-02-19 05:22:07.747187] I [MSGID: 108026] [afr-self-heal-common.c:1173:afr_log_selfheal] 0-datastore1-replicate-0: Completed data selfheal on 9e66f0d2-501b-4cf9-80db-f423e2e2ef0f. sources=[1] sinks=0
>
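> For reference, anything still pending heal should show up per brick
> with the following (a sketch; run it on either node):
>
>   gluster volume heal datastore1 info
>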
> This is from the 2nd node:
>
> [2017-02-07 22:29:15.044422] I [MSGID: 114035] [client-handshake.c:202:client_set_lk_version_cbk] 0-datastore1-client-0: Server lk version = 1
> [2017-02-08 00:13:58.612483] I [MSGID: 108026] [afr-self-heal-common.c:1173:afr_log_selfheal] 0-datastore1-replicate-0: Completed data selfheal on b32ccae9-01ed-406c-988f-64394e4cb37c. sources=[0] sinks=1
> [2017-02-13 16:44:10.570176] I [MSGID: 108026] [afr-self-heal-common.c:1173:afr_log_selfheal] 0-datastore1-replicate-0: Completed data selfheal on bc8f6a7e-31e5-4b48-946c-f779a4b2e64f. sources=[1] sinks=0
> [2017-02-19 04:30:46.049524] I [MSGID: 108026] [afr-self-heal-common.c:1173:afr_log_selfheal] 0-datastore1-replicate-0: Completed data selfheal on bc8f6a7e-31e5-4b48-946c-f779a4b2e64f. sources=[1] sinks=0
>
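> If it helps, the self-heal daemon's crawl history (when it ran and how
> much it healed per brick) can be pulled with this sketch, assuming a
> reasonably recent gluster:
>
>   gluster volume heal datastore1 statistics
>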
> Could this be the cause?
>
> This is the current volume configuration. I'll be adding an additional
> node in the near future, but I need this to be stable first.
>
> Volume Name: datastore1
> Type: Replicate
> Volume ID: e4dbbf6e-11e6-4b36-bab0-c37647ef6ad6
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 2 = 2
> Transport-type: tcp
> Bricks:
> Brick1: srvpve1g:/data/brick1/brick
> Brick2: srvpve2g:/data/brick1/brick
> Options Reconfigured:
> nfs.disable: on
> performance.readdir-ahead: on
> transport.address-family: inet
>
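> A side note on the config: a plain 1 x 2 replica has no quorum enabled
> by default, so the current settings are worth double-checking. A sketch
> ("volume get" assumes gluster >= 3.7):
>
>   gluster volume get datastore1 cluster.quorum-type
>   gluster volume get datastore1 cluster.server-quorum-type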
>

Nobody has any clue on this?
Should I provide more information/logs?

From what I understand, a heal was triggered, but I have no idea why
that happened, or why the KVM VM was shut down.
The gluster client is supposed to be a client of both servers, for
failover. Also, there are other VMs running and they did not get shut
down.
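One thing I could check (a sketch; mount-log names depend on the mount
point, so adjust to the actual setup) is whether the client briefly
lost both bricks around the stop times:

  # which clients are connected to each brick right now
  gluster volume status datastore1 clients

  # disconnect messages in the fuse mount logs around the stop times
  grep -i 'disconnect' /var/log/glusterfs/*.log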

Alessandro

_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users