<div dir="ltr">These symptoms appear to be the same as I've recorded in this post:<br><br><a href="http://lists.gluster.org/pipermail/gluster-users/2017-September/032435.html">http://lists.gluster.org/pipermail/gluster-users/2017-September/032435.html</a><br></div><div class="gmail_extra"><br><div class="gmail_quote">On Wed, Sep 13, 2017 at 7:01 AM, Atin Mukherjee <span dir="ltr"><<a href="mailto:atin.mukherjee83@gmail.com" target="_blank">atin.mukherjee83@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div><div dir="auto">Additionally the brick log file of the same brick would be required. Please look for if brick process went down or crashed. Doing a volume start force should resolve the issue.</div><div><div class="h5"><br><div class="gmail_quote"><div>On Wed, 13 Sep 2017 at 16:28, Gaurav Yadav <<a href="mailto:gyadav@redhat.com" target="_blank">gyadav@redhat.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div>Please send me the logs as well i.e glusterd.logs and <span class="m_4035032029774728941m_3429018163532658876gmail-im">cmd_history.log. <br><br></span></div><div class="gmail_extra"><br><div class="gmail_quote">On Wed, Sep 13, 2017 at 1:45 PM, lejeczek <span><<a href="mailto:peljasz@yahoo.co.uk" target="_blank">peljasz@yahoo.co.uk</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"></blockquote></div></div><div class="gmail_extra"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span><br>
On 13/09/17 06:21, Gaurav Yadav wrote:

Please provide the output of gluster volume info, gluster volume status, and gluster peer status.

Apart from the above info, please provide the glusterd logs and cmd_history.log.

Thanks
Gaurav

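(In case it helps anyone gathering the same data: on a stock GlusterFS install the requested output and logs can typically be collected as below; exact log file names vary a little between releases, so treat the paths as the usual defaults rather than a guarantee.)

# run on each peer
gluster volume info
gluster volume status
gluster peer status

# usual default log locations
/var/log/glusterfs/glusterd.log        # management daemon log (older releases: etc-glusterfs-glusterd.vol.log)
/var/log/glusterfs/cmd_history.log     # history of gluster CLI commands run on that node
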
On Tue, Sep 12, 2017 at 2:22 PM, lejeczek <peljasz@yahoo.co.uk> wrote:

hi everyone

I have a 3-peer cluster with all volumes in replica mode, 9 volumes in total.
What I see, unfortunately, is that one brick fails in one volume, and when it
happens it is always the same brick in the same volume.
Running "gluster vol status $vol" shows the brick as not online.
Restarting glusterd with systemctl does not help; only a system reboot seems to
help, until it happens again next time.

How do I troubleshoot this weird misbehaviour?
many thanks, L.
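(A minimal first check for a brick reported as not online, assuming $vol is the affected volume and <brick-path> is the brick directory shown by "gluster vol status": see whether the brick's glusterfsd process is still alive on that node before restarting anything.)

gluster volume status $vol                      # which brick shows Online "N"
ps aux | grep glusterfsd | grep <brick-path>    # is the brick process still running?
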
hi, here:

$ gluster vol info C-DATA

Volume Name: C-DATA
Type: Replicate
Volume ID: 18ffba73-532e-4a4d-84da-fceea52f8c2e
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.5.6.49:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-C-DATA
Brick2: 10.5.6.100:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-C-DATA
Brick3: 10.5.6.32:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-C-DATA
Options Reconfigured:
performance.md-cache-timeout: 600
performance.cache-invalidation: on
performance.stat-prefetch: on
features.cache-invalidation-timeout: 600
features.cache-invalidation: on
performance.io-thread-count: 64
performance.cache-size: 128MB
cluster.self-heal-daemon: enable
features.quota-deem-statfs: on
changelog.changelog: on
geo-replication.ignore-pid-check: on
geo-replication.indexing: on
features.inode-quota: on
features.quota: on
performance.readdir-ahead: on
nfs.disable: on
transport.address-family: inet
performance.cache-samba-metadata: on

$ gluster vol status C-DATA
Status of volume: C-DATA
Gluster process                                                    TCP Port  RDMA Port  Online  Pid
----------------------------------------------------------------------------------------------------
Brick 10.5.6.49:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-C-DATA    N/A       N/A        N       N/A
Brick 10.5.6.100:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-C-DATA   49152     0          Y       9376
Brick 10.5.6.32:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-C-DATA    49152     0          Y       8638
Self-heal Daemon on localhost                                      N/A       N/A        Y       387879
Quota Daemon on localhost                                          N/A       N/A        Y       387891
Self-heal Daemon on rider.private.ccnr.ceb.private.cam.ac.uk       N/A       N/A        Y       16439
Quota Daemon on rider.private.ccnr.ceb.private.cam.ac.uk           N/A       N/A        Y       16451
Self-heal Daemon on 10.5.6.32                                      N/A       N/A        Y       7708
Quota Daemon on 10.5.6.32                                          N/A       N/A        Y       8623
Self-heal Daemon on 10.5.6.17                                      N/A       N/A        Y       20549
Quota Daemon on 10.5.6.17                                          N/A       N/A        Y       9337

Task Status of Volume C-DATA
----------------------------------------------------------------------------------------------------
There are no active volume tasks
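
(Per the suggestion at the top of the thread, an offline brick can usually be brought back without rebooting the node. A sketch, assuming default log locations; the brick log file name is normally the brick path with "/" replaced by "-".)

# on 10.5.6.49, look at why the brick process stopped or crashed
less /var/log/glusterfs/bricks/__.aLocalStorages-0-0-GLUSTERs-0GLUSTER-C-DATA.log

# start only the bricks that are not running; bricks already online are left alone
gluster volume start C-DATA force

# confirm the brick is back online
gluster volume status C-DATA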
--
--Atin
_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users