Please send me the logs as well, i.e. the glusterd logs and cmd_history.log.
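For reference, a minimal way to gather everything requested in this thread, assuming default install paths (on older releases the glusterd log is named etc-glusterfs-glusterd.vol.log rather than glusterd.log):

  gluster volume info   > vol-info.txt
  gluster volume status > vol-status.txt
  gluster peer status   > peer-status.txt
  # both logs normally live under /var/log/glusterfs/ on each node
  tar czf gluster-logs.tar.gz /var/log/glusterfs/glusterd.log /var/log/glusterfs/cmd_history.log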

On Wed, Sep 13, 2017 at 1:45 PM, lejeczek <peljasz@yahoo.co.uk> wrote:
>
> On 13/09/17 06:21, Gaurav Yadav wrote:
>> Please provide the output of gluster volume info, gluster volume status,
>> and gluster peer status.
>>
>> Apart from the above info, please provide the glusterd logs and
>> cmd_history.log.
>>
>> Thanks
>> Gaurav
>>
>> On Tue, Sep 12, 2017 at 2:22 PM, lejeczek <peljasz@yahoo.co.uk> wrote:
>>>
>>> hi everyone
>>>
>>> I have a 3-peer cluster with all vols in replica mode, 9 vols.
>>> What I see, unfortunately, is one brick failing in one vol, and when it
>>> happens it is always the same vol on the same brick.
>>> Command: gluster vol status $vol - would show the brick as not online.
>>> Restarting glusterd with systemctl does not help; only a system reboot
>>> seems to help, until it happens the next time.
>>>
>>> How to troubleshoot this weird misbehaviour?
>>> many thanks, L.
>>>
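Not something proposed in this thread, but a commonly suggested middle step between restarting glusterd and rebooting the whole node is to respawn only the missing brick process and then read that brick's own log; the paths below are assumptions based on GlusterFS defaults:

  # run on the node whose brick is reported as not online
  gluster volume start $vol force   # "force" restarts bricks that are down, without touching running ones
  # each brick writes its own log; the file name is derived from the brick path
  ls /var/log/glusterfs/bricks/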
>
> hi, here:
>
> $ gluster vol info C-DATA
>
> Volume Name: C-DATA
> Type: Replicate
> Volume ID: 18ffba73-532e-4a4d-84da-fceea52f8c2e
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 3 = 3
> Transport-type: tcp
> Bricks:
> Brick1: 10.5.6.49:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-C-DATA
> Brick2: 10.5.6.100:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-C-DATA
> Brick3: 10.5.6.32:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-C-DATA
> Options Reconfigured:
> performance.md-cache-timeout: 600
> performance.cache-invalidation: on
> performance.stat-prefetch: on
> features.cache-invalidation-timeout: 600
> features.cache-invalidation: on
> performance.io-thread-count: 64
> performance.cache-size: 128MB
> cluster.self-heal-daemon: enable
> features.quota-deem-statfs: on
> changelog.changelog: on
> geo-replication.ignore-pid-check: on
> geo-replication.indexing: on
> features.inode-quota: on
> features.quota: on
> performance.readdir-ahead: on
> nfs.disable: on
> transport.address-family: inet
> performance.cache-samba-metadata: on
>
>
> $ gluster vol status C-DATA
> Status of volume: C-DATA
> Gluster process                             TCP Port  RDMA Port  Online  Pid
> ------------------------------------------------------------------------------
> Brick 10.5.6.49:/__.aLocalStorages/0/0-GLUS
> TERs/0GLUSTER-C-DATA                        N/A       N/A        N       N/A
> Brick 10.5.6.100:/__.aLocalStorages/0/0-GLU
> STERs/0GLUSTER-C-DATA                       49152     0          Y       9376
> Brick 10.5.6.32:/__.aLocalStorages/0/0-GLUS
> TERs/0GLUSTER-C-DATA                        49152     0          Y       8638
> Self-heal Daemon on localhost               N/A       N/A        Y       387879
> Quota Daemon on localhost                   N/A       N/A        Y       387891
> Self-heal Daemon on rider.private.ccnr.ceb.
> private.cam.ac.uk                           N/A       N/A        Y       16439
> Quota Daemon on rider.private.ccnr.ceb.priv
> ate.cam.ac.uk                               N/A       N/A        Y       16451
> Self-heal Daemon on 10.5.6.32               N/A       N/A        Y       7708
> Quota Daemon on 10.5.6.32                   N/A       N/A        Y       8623
> Self-heal Daemon on 10.5.6.17               N/A       N/A        Y       20549
> Quota Daemon on 10.5.6.17                   N/A       N/A        Y       9337
>
> Task Status of Volume C-DATA
> ------------------------------------------------------------------------------
> There are no active volume tasks
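The output above shows the first brick (on 10.5.6.49) as the one that is offline. A generic next step, not taken from this thread, would be to check on 10.5.6.49 whether a glusterfsd process for that brick exists at all and to read the brick's own log around the failure; the exact log file name below is an assumption (brick logs are normally kept under /var/log/glusterfs/bricks/ with slashes in the brick path turned into dashes):

  # on 10.5.6.49
  ps ax | grep '[g]lusterfsd' | grep 0GLUSTER-C-DATA
  tail -n 200 /var/log/glusterfs/bricks/__.aLocalStorages-0-0-GLUSTERs-0GLUSTER-C-DATA.log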
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users