Additionally, the brick log file of that same brick would be required. Please check whether the brick process went down or crashed. Doing a volume start force should resolve the issue; a rough sketch of those steps is below.
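
For reference, a minimal sketch of those checks, assuming a standard install that logs under /var/log/glusterfs and using the C-DATA volume and the 10.5.6.49 brick from the output further down in this thread; the exact brick log filename is derived from the brick path and may differ on your system:

# On the node hosting the offline brick (10.5.6.49), inspect the brick log;
# brick logs are typically named after the brick path with '/' replaced by '-':
$ less /var/log/glusterfs/bricks/__.aLocalStorages-0-0-GLUSTERs-0GLUSTER-C-DATA.log

# Check whether the brick process (glusterfsd) for this volume is still running:
$ ps aux | grep glusterfsd | grep C-DATA

# Start any bricks of the volume that are not running, leaving running ones untouched:
$ gluster volume start C-DATA force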

On Wed, 13 Sep 2017 at 16:28, Gaurav Yadav <gyadav@redhat.com> wrote:

Please send me the logs as well, i.e. the glusterd log and cmd_history.log.

On Wed, Sep 13, 2017 at 1:45 PM, lejeczek <peljasz@yahoo.co.uk> wrote:

On 13/09/17 06:21, Gaurav Yadav wrote:

Please provide the output of gluster volume info, gluster volume status and gluster peer status.

Apart from the above info, please also provide the glusterd log and cmd_history.log.
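
For completeness, those commands and the usual default locations of the requested logs are roughly as follows, assuming standard packages that log under /var/log/glusterfs (file names can vary between GlusterFS versions):

$ gluster volume info
$ gluster volume status
$ gluster peer status

# Management-daemon log and the command history kept by glusterd:
$ less /var/log/glusterfs/glusterd.log
$ less /var/log/glusterfs/cmd_history.log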

Thanks
Gaurav

On Tue, Sep 12, 2017 at 2:22 PM, lejeczek <peljasz@yahoo.co.uk> wrote:

    hi everyone

    I have a 3-peer cluster with all vols in replica mode, 9
    vols.
    What I see, unfortunately, is one brick failing in one
    vol; when it happens it's always the same vol on the
    same brick.
    Command: gluster vol status $vol - would show the brick
    as not online.
    Restarting glusterd with systemctl does not help; only a
    system reboot seems to help, until it happens the next time.

    How to troubleshoot this weird misbehaviour?
    many thanks, L.
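
The checks described above would be along these lines (the volume name is taken from the status output further down; substitute your own):

# Show brick status for the affected volume; the failing brick appears with Online "N":
$ gluster vol status C-DATA

# Restart only the management daemon, which reportedly does not bring the brick back:
$ systemctl restart glusterd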

hi, here:

$ gluster vol info C-DATA

Volume Name: C-DATA
Type: Replicate
Volume ID: 18ffba73-532e-4a4d-84da-fceea52f8c2e
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.5.6.49:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-C-DATA
Brick2: 10.5.6.100:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-C-DATA
Brick3: 10.5.6.32:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-C-DATA
Options Reconfigured:
performance.md-cache-timeout: 600
performance.cache-invalidation: on
performance.stat-prefetch: on
features.cache-invalidation-timeout: 600
features.cache-invalidation: on
performance.io-thread-count: 64
performance.cache-size: 128MB
cluster.self-heal-daemon: enable
features.quota-deem-statfs: on
changelog.changelog: on
geo-replication.ignore-pid-check: on
geo-replication.indexing: on
features.inode-quota: on
features.quota: on
performance.readdir-ahead: on
nfs.disable: on
transport.address-family: inet
performance.cache-samba-metadata: on

$ gluster vol status C-DATA
Status of volume: C-DATA
Gluster process                                                    TCP Port  RDMA Port  Online  Pid
----------------------------------------------------------------------------------------------------
Brick 10.5.6.49:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-C-DATA    N/A       N/A        N       N/A
Brick 10.5.6.100:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-C-DATA   49152     0          Y       9376
Brick 10.5.6.32:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-C-DATA    49152     0          Y       8638
Self-heal Daemon on localhost                                      N/A       N/A        Y       387879
Quota Daemon on localhost                                          N/A       N/A        Y       387891
Self-heal Daemon on rider.private.ccnr.ceb.private.cam.ac.uk       N/A       N/A        Y       16439
Quota Daemon on rider.private.ccnr.ceb.private.cam.ac.uk           N/A       N/A        Y       16451
Self-heal Daemon on 10.5.6.32                                      N/A       N/A        Y       7708
Quota Daemon on 10.5.6.32                                          N/A       N/A        Y       8623
Self-heal Daemon on 10.5.6.17                                      N/A       N/A        Y       20549
Quota Daemon on 10.5.6.17                                          N/A       N/A        Y       9337

Task Status of Volume C-DATA
----------------------------------------------------------------------------------------------------
There are no active volume tasks

_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

--
--Atin