<div dir="ltr">Hi,<div><br></div><div><br><div class="gmail_extra"><br><div class="gmail_quote">On 16 October 2018 at 18:20, <span dir="ltr"><<a href="mailto:jring@mail.de" target="_blank">jring@mail.de</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi everybody,<br>
<br>
I have created a distributed dispersed volume on 4.1.5 under centos7 like this a few days ago:<br>
<br>
gluster volume create data_vol1 disperse-data 4 redundancy 2 transport tcp \<br>
\<br>
gf-p-d-01.isec.foobar.com:/<wbr>bricks/brick1/brick \<br>
gf-p-d-03.isec.foobar.com:/<wbr>bricks/brick1/brick \<br>
gf-p-d-04.isec.foobar.com:/<wbr>bricks/brick1/brick \<br>
gf-p-k-01.isec.foobar.com:/<wbr>bricks/brick1/brick \<br>
gf-p-k-03.isec.foobar.com:/<wbr>bricks/brick1/brick \<br>
gf-p-k-04.isec.foobar.com:/<wbr>bricks/brick1/brick \<br>
\<br>
gf-p-d-01.isec.foobar.com:/<wbr>bricks/brick2/brick \<br>
gf-p-d-03.isec.foobar.com:/<wbr>bricks/brick2/brick \<br>
gf-p-d-04.isec.foobar.com:/<wbr>bricks/brick2/brick \<br>
gf-p-k-01.isec.foobar.com:/<wbr>bricks/brick2/brick \<br>
gf-p-k-03.isec.foobar.com:/<wbr>bricks/brick2/brick \<br>
gf-p-k-04.isec.foobar.com:/<wbr>bricks/brick2/brick \<br>
\<br>
... same for brick3 to brick9...<br>
\<br>
gf-p-d-01.isec.foobar.com:/<wbr>bricks/brick10/brick \<br>
gf-p-d-03.isec.foobar.com:/<wbr>bricks/brick10/brick \<br>
gf-p-d-04.isec.foobar.com:/<wbr>bricks/brick10/brick \<br>
gf-p-k-01.isec.foobar.com:/<wbr>bricks/brick10/brick \<br>
gf-p-k-03.isec.foobar.com:/<wbr>bricks/brick10/brick \<br>
gf-p-k-04.isec.foobar.com:/<wbr>bricks/brick10/brick<br>
<br>
This worked nicely and resulted in the following filesystem:<br>
[root@gf-p-d-01 ~]# df -h /data/<br>
> Filesystem                            Size  Used Avail Use% Mounted on
> gf-p-d-01.isec.foobar.com:/data_vol1  219T  2.2T  217T   2% /data
>
> Each of the bricks resides on its own 6TB disk with one big partition formatted with XFS.
>
> Yesterday a colleague looked at the filesystem and found some space missing...
>
> [root@gf-p-d-01 ~]# df -h /data/
> Filesystem                            Size  Used Avail Use% Mounted on
> gf-p-d-01.isec.foobar.com:/data_vol1   22T  272G   22T   2% /data
>
> Some googling turned up the following bug report against 3.4, which looks familiar:
>
> https://bugzilla.redhat.com/show_bug.cgi?id=1541830
>
> So we did a quick "grep shared-brick-count /var/lib/glusterd/vols/data_vol1/*" on all boxes and found that on 5 out of 6 boxes shared-brick-count was 0 for all bricks on remote boxes and 1 for the local bricks.
>
> Is this the expected result, or should we have 1 everywhere (as the quick-fix script from the case sets it)?

No, this is fine. The shared-brick-count only needs to be 1 for the local bricks. The value for the remote bricks can be 0.
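
For context, shared-brick-count is the factor glusterd uses to divide a brick's reported size when several bricks share the same underlying filesystem, so for bricks that each sit on their own disk it should be 1 on the node that hosts them. A quick per-node check, essentially your grep narrowed to the generated volfiles (the exact filenames on your systems will differ):

# run on each node; expect "option shared-brick-count 1" for that node's own
# bricks and 0 for the remote ones
grep -H 'option shared-brick-count' /var/lib/glusterd/vols/data_vol1/*.vol
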
> Also on one box (the one where we created the volume from, btw) we have shared-brick-count=0 for all remote bricks and 10 for the local bricks.

This is a problem. The shared-brick-count should be 1 for the local bricks here as well.
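
A back-of-the-envelope check (assuming roughly 5.5TiB usable per 6TB disk): with shared-brick-count=1 everywhere, each 4+2 disperse subvolume reports about 4 x 5.5TiB = 22TiB, and the 10 subvolumes together add up to the ~219T you saw at first. If the bricks on that one node report only a tenth of their size, the smallest brick drags each subvolume down to about 4 x 0.55TiB = 2.2TiB, and the total becomes the ~22T you are seeing now, so the numbers fit.
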
> Is it possible that the bug from 3.4 still exists in 4.1.5, and should we try the filter script which sets shared-brick-count=1 for all bricks?

Can you try
1. restarting glusterd on all the nodes, one after another (not at the same time), and
2. setting a volume option (say, gluster volume set <volname> cluster.min-free-disk 11%)

and see if it fixes the issue? A rough sketch of the commands is below.
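
Something along these lines, assuming systemd on CentOS 7 and using your volume name:

# on each node in turn, waiting for glusterd to come back up before moving on
systemctl restart glusterd

# then, from any one node, set a harmless option so the volfiles get regenerated
gluster volume set data_vol1 cluster.min-free-disk 11%

# and re-check on every node
grep shared-brick-count /var/lib/glusterd/vols/data_vol1/*
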
Regards,
Nithya

> The volume is not currently in production so now would be the time to play around and find the problem...
>
> TIA and regards,
>
> Joachim