<div dir="ltr">Adding gluster-users. <br><br><div><div class="gmail_extra"><br><div class="gmail_quote">On Wed, Jan 31, 2018 at 3:55 PM, Misak Khachatryan <span dir="ltr"><<a href="mailto:kmisak@gmail.com" target="_blank">kmisak@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi,<br>
<br>
here is the output from virt3 - problematic host:<br>
<br>
[root@virt3 ~]# gluster volume status<br>
Status of volume: data<br>
Gluster process TCP Port RDMA Port Online Pid<br>
------------------------------------------------------------------------------<br>
Brick virt1:/gluster/brick2/data 49152 0 Y 3536<br>
Brick virt2:/gluster/brick2/data 49152 0 Y 3557<br>
Brick virt3:/gluster/brick2/data 49152 0 Y 3523<br>
Self-heal Daemon on localhost N/A N/A Y 32056<br>
Self-heal Daemon on virt2 N/A N/A Y 29977<br>
Self-heal Daemon on virt1 N/A N/A Y 1788<br>
<br>
Task Status of Volume data<br>
------------------------------------------------------------------------------<br>
There are no active volume tasks<br>
<br>
Status of volume: engine<br>
Gluster process TCP Port RDMA Port Online Pid<br>
------------------------------------------------------------------------------<br>
Brick virt1:/gluster/brick1/engine 49153 0 Y 3561<br>
Brick virt2:/gluster/brick1/engine 49153 0 Y 3570<br>
Brick virt3:/gluster/brick1/engine 49153 0 Y 3534<br>
Self-heal Daemon on localhost N/A N/A Y 32056<br>
Self-heal Daemon on virt2 N/A N/A Y 29977<br>
Self-heal Daemon on virt1 N/A N/A Y 1788<br>
<br>
Task Status of Volume engine<br>
------------------------------------------------------------------------------<br>
There are no active volume tasks<br>
<br>
Status of volume: iso<br>
Gluster process TCP Port RDMA Port Online Pid<br>
------------------------------------------------------------------------------<br>
Brick virt1:/gluster/brick4/iso 49154 0 Y 3585<br>
Brick virt2:/gluster/brick4/iso 49154 0 Y 3592<br>
Brick virt3:/gluster/brick4/iso 49154 0 Y 3543<br>
Self-heal Daemon on localhost N/A N/A Y 32056<br>
Self-heal Daemon on virt1 N/A N/A Y 1788<br>
Self-heal Daemon on virt2 N/A N/A Y 29977<br>
<br>
Task Status of Volume iso<br>
------------------------------------------------------------------------------<br>
There are no active volume tasks<br>
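All bricks above report Online = Y, so a paste like this can also be sanity-checked by script. A small sketch (field positions assumed from the whitespace-separated output above; the sample data forces one brick offline to show the detection):

```shell
# Flag bricks whose Online column is not 'Y' in 'gluster volume status'
# output. Sample input mimics the paste above, with virt2 marked offline
# to demonstrate the check.
status='Brick virt1:/gluster/brick2/data 49152 0 Y 3536
Brick virt2:/gluster/brick2/data 49152 0 N 3557
Brick virt3:/gluster/brick2/data 49152 0 Y 3523'

# With whitespace-separated fields, field 5 is the Online flag.
offline=$(printf '%s\n' "$status" | awk '$1 == "Brick" && $5 != "Y" { print $2 }')
echo "Offline bricks: ${offline:-none}"
```

On a real peer the input would come from `gluster volume status` directly rather than a pasted variable.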
<br>
and I've attached one of the logs as well.<br>
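Since one brick reportedly shows 0% usage, the self-heal backlog is also worth checking. A hedged sketch (volume names taken from the status output above; run on one of the gluster peers):

```shell
# List pending heal entries per volume; a persistent backlog on the
# 0%-used brick would suggest an interrupted heal after the peer re-add.
volumes="data engine iso"
if command -v gluster >/dev/null 2>&1; then
  for vol in $volumes; do
    echo "== $vol =="
    gluster volume heal "$vol" info
  done
else
  echo "gluster CLI not found; run this on a gluster peer" >&2
fi
```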
<br>
Thanks in advance<br>
<br>
Best regards,<br>
Misak Khachatryan<br>
<div class="HOEnZb"><div class="h5"><br>
<br>
On Wed, Jan 31, 2018 at 9:17 AM, Sahina Bose <<a href="mailto:sabose@redhat.com">sabose@redhat.com</a>> wrote:<br>
> Could you provide the output of "gluster volume status" and the gluster<br>
> mount logs to check further?<br>
> Are all the hosts shown as active in the engine (that is, is the monitoring<br>
> working)?<br>
><br>
> On Wed, Jan 31, 2018 at 1:07 AM, Misak Khachatryan <<a href="mailto:kmisak@gmail.com">kmisak@gmail.com</a>> wrote:<br>
>><br>
>> Hi,<br>
>><br>
>> After upgrading to 4.2 I'm getting "VM paused due to unknown storage<br>
>> error". While upgrading I had a gluster problem with one of the<br>
>> hosts, which I fixed by re-adding it to the gluster peers. Now I see<br>
>> something weird in the brick configuration (see attachment): one of the<br>
>> bricks uses 0% of its space.<br>
>><br>
>> How can I diagnose this? I can't see anything wrong in the logs.<br>
>><br>
>><br>
>><br>
>><br>
>> Best regards,<br>
>> Misak Khachatryan<br>
>><br>
>> _______________________________________________<br>
>> Users mailing list<br>
>> <a href="mailto:Users@ovirt.org">Users@ovirt.org</a><br>
>> <a href="http://lists.ovirt.org/mailman/listinfo/users" rel="noreferrer" target="_blank">http://lists.ovirt.org/mailman/listinfo/users</a><br>
>><br>
><br>
</div></div></blockquote></div><br></div></div></div>