<div dir="ltr">Hi Nithya,<div><br></div><div>My initial setup was composed of 2 similar nodes: stor1data and stor2data. A month ago I expanded both volumes with a new node: stor3data (2 bricks per volume).</div><div>Of course, then to add the new peer with the bricks I did the 'balance force' operation. This task finished successfully (you can see info below) and number of files on the 3 nodes were very similar .</div><div><br></div><div>For volumedisk1 I only have files of 500MB and they are continuosly written in sequential mode. The filename pattern of written files is:</div><div><br></div><div>run.node1.0000.rd </div><div>run.node2.0000.rd </div><div><div>run.node1.0001.rd </div><div>run.node2.0001.rd </div></div><div><div>run.node1.0002.rd </div><div>run.node2.0002.rd </div></div><div>...........</div><div>...........</div><div><div>run.node1.X.rd </div><div>run.node2.X.rd </div></div><div><br></div><div>( X ranging from 0000 to infinite )</div><div><br></div><div>Curiously stor1data and stor2data maintain similar ratios in bytes:</div><div><br></div><div>Filesystem 1K-blocks Used Available Use% Mounted on<br></div><div>/dev/sdc1 52737613824 17079174264 35658439560 33% /mnt/glusterfs/vol1 -> stor1data</div><div>/dev/sdc1 52737613824 17118810848 35618802976 33% /mnt/glusterfs/vol1 -> stor2data<br></div><div><br></div><div>However the ratio on som3data differs too much (1TB):</div><div>Filesystem 1K-blocks Used Available Use% Mounted on<br></div><div><div>/dev/sdc1 52737613824 15479191748 37258422076 30% /mnt/disk_c/glusterfs/vol1 -> stor3data</div><div>/dev/sdd1 52737613824 15566398604 37171215220 30% /mnt/disk_d/glusterfs/vol1 -> stor3data</div></div><div><br></div><div>Thinking in inodes:</div><div><div><br></div><div>Filesystem Inodes IUsed IFree IUse% Mounted on</div><div>/dev/sdc1 5273970048 851053 5273118995 1% /mnt/glusterfs/vol1 -> stor1data<br></div><div>/dev/sdc1 5273970048 849388 5273120660 1% /mnt/glusterfs/vol1 -> stor2data<br></div></div><div><div><br></div><div>/dev/sdc1 5273970048 846877 5273123171 1% /mnt/disk_c/glusterfs/vol1 -> stor3data</div><div>/dev/sdd1 5273970048 845250 5273124798 1% /mnt/disk_d/glusterfs/vol1 -> stor3data</div></div><div><br></div><div>851053 (stor1) - 845250 (stor3) = 5803 files of difference !</div><div><br></div><div>In adition, correct me if I'm wrong, stor3data should have 50% of probability to store a new file (even taking into account the algorithm of DHT with filename patterns)</div><div><br></div><div>Thanks,</div><div>Greetings.</div><div><br></div><div>Jose V.</div><div><br></div><div><div>Status of volume: volumedisk0</div><div>Gluster process TCP Port RDMA Port Online Pid</div><div>------------------------------------------------------------------------------</div><div>Brick stor1data:/mnt/glusterfs/vol0/bri</div><div>ck1 49152 0 Y 13533</div><div>Brick stor2data:/mnt/glusterfs/vol0/bri</div><div>ck1 49152 0 Y 13302</div><div>Brick stor3data:/mnt/disk_b1/glusterfs/</div><div>vol0/brick1 49152 0 Y 17371</div><div>Brick stor3data:/mnt/disk_b2/glusterfs/</div><div>vol0/brick1 49153 0 Y 17391</div><div>NFS Server on localhost N/A N/A N N/A </div><div>NFS Server on stor3data N/A N/A N N/A </div><div>NFS Server on stor2data N/A N/A N N/A </div><div> </div><div>Task Status of Volume volumedisk0</div><div>------------------------------------------------------------------------------</div><div>Task : Rebalance </div><div>ID : 7f5328cb-ed25-4627-9196-fb3e29e0e4ca</div><div>Status : completed </div><div> </div><div>Status of volume: 
<div><div>Status of volume: volumedisk0</div><div>Gluster process TCP Port RDMA Port Online Pid</div><div>------------------------------------------------------------------------------</div><div>Brick stor1data:/mnt/glusterfs/vol0/bri</div><div>ck1 49152 0 Y 13533</div><div>Brick stor2data:/mnt/glusterfs/vol0/bri</div><div>ck1 49152 0 Y 13302</div><div>Brick stor3data:/mnt/disk_b1/glusterfs/</div><div>vol0/brick1 49152 0 Y 17371</div><div>Brick stor3data:/mnt/disk_b2/glusterfs/</div><div>vol0/brick1 49153 0 Y 17391</div><div>NFS Server on localhost N/A N/A N N/A </div><div>NFS Server on stor3data N/A N/A N N/A </div><div>NFS Server on stor2data N/A N/A N N/A </div><div> </div><div>Task Status of Volume volumedisk0</div><div>------------------------------------------------------------------------------</div><div>Task : Rebalance </div><div>ID : 7f5328cb-ed25-4627-9196-fb3e29e0e4ca</div><div>Status : completed </div><div> </div><div>Status of volume: volumedisk1</div><div>Gluster process TCP Port RDMA Port Online Pid</div><div>------------------------------------------------------------------------------</div><div>Brick stor1data:/mnt/glusterfs/vol1/bri</div><div>ck1 49153 0 Y 13579</div><div>Brick stor2data:/mnt/glusterfs/vol1/bri</div><div>ck1 49153 0 Y 13344</div><div>Brick stor3data:/mnt/disk_c/glusterfs/v</div><div>ol1/brick1 49154 0 Y 17439</div><div>Brick stor3data:/mnt/disk_d/glusterfs/v</div><div>ol1/brick1 49155 0 Y 17459</div><div>NFS Server on localhost N/A N/A N N/A </div><div>NFS Server on stor3data N/A N/A N N/A </div><div>NFS Server on stor2data N/A N/A N N/A </div><div> </div><div>Task Status of Volume volumedisk1</div><div>------------------------------------------------------------------------------</div><div>Task : Rebalance </div><div>ID : d0048704-beeb-4a6a-ae94-7e7916423fd3</div><div>Status : completed </div></div><div><br></div><div class="gmail_extra"><br><div class="gmail_quote">2018-02-28 15:40 GMT+01:00 Nithya Balachandran <span dir="ltr"><<a href="mailto:nbalacha@redhat.com" target="_blank">nbalacha@redhat.com</a>></span>:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">Hi Jose,<br><div class="gmail_extra"><br><div class="gmail_quote"><span class="gmail-">On 28 February 2018 at 18:28, Jose V. Carrión <span dir="ltr"><<a href="mailto:jocarbur@gmail.com" target="_blank">jocarbur@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">Hi Nithya,<div><br></div><div>I applied the workaround for this bug and now df shows the right size:</div><div><span><div><br></div></span></div></div></blockquote></span><div>That is good to hear.</div><div><div class="gmail-h5"><div><br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div><span><div></div><div>[root@stor1 ~]# df -h</div><div>Filesystem Size Used Avail Use% Mounted on</div></span><div>/dev/sdb1 26T 1,1T 25T 4% /mnt/glusterfs/vol0<br></div><div>/dev/sdc1 50T 16T 34T 33% /mnt/glusterfs/vol1</div><div>stor1data:/volumedisk0</div><div> 101T 3,3T 97T 4% /volumedisk0</div><div>stor1data:/volumedisk1</div><div> 197T 61T 136T 31% /volumedisk1</div></div><div><br></div><div><div><br></div><div>[root@stor2 ~]# df -h</div><span><div>Filesystem Size Used Avail Use% Mounted on</div></span><div>/dev/sdb1 26T 1,1T 25T 4% /mnt/glusterfs/vol0<br></div><div>/dev/sdc1 50T 16T 34T 33% /mnt/glusterfs/vol1</div><div>stor2data:/volumedisk0</div><div> 101T 3,3T 97T 4% /volumedisk0</div><div>stor2data:/volumedisk1</div><div> 197T 61T 136T 31% /volumedisk1</div></div><div><br></div><div><br></div><div><div>[root@stor3 ~]# df -h</div><span><div>Filesystem Size Used Avail Use% Mounted on</div></span><div>/dev/sdb1 25T 638G 24T 3% /mnt/disk_b1/glusterfs/vol0<br></div><div>/dev/sdb2 25T 654G 24T 3% /mnt/disk_b2/glusterfs/vol0</div><div>/dev/sdc1 50T 15T 35T 30% /mnt/disk_c/glusterfs/vol1</div><div>/dev/sdd1 50T 15T 35T 30% /mnt/disk_d/glusterfs/vol1</div><div>stor3data:/volumedisk0</div><div> 101T 3,3T 97T 4% /volumedisk0</div><div>stor3data:/volumedisk1</div><div> 197T 61T 136T 31% /volumedisk1</div></div><div><br></div><div><br></div><div>However I'm concerned because, as you can see, volumedisk0 on stor3data is composed of 2 bricks on the same disk but on different partitions 
(/dev/sdb1 and /dev/sdb2).</div><div>After applying the workaround, the shared-brick-count parameter was set to 1 for all the bricks on all the servers (see below). Could this be an issue?</div><div><br></div></div></blockquote></div></div><div>No, this is correct. The shared-brick-count will be > 1 only if multiple bricks share the same partition.</div><span class="gmail-"><div><br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div></div><div>Also, I can see that stor3data is now unbalanced with respect to stor1data and stor2data. The three nodes have bricks of the same size, but the stor3data bricks have used 1TB less than stor1data and stor2data:</div></div></blockquote><div><br></div><div><br></div></span><div>This does not necessarily indicate a problem. The distribution need not be exactly equal and depends on the filenames. Can you provide more information on the kind of dataset (how many files, sizes etc) on this volume? Did you create the volume with all 4 bricks or add some later?</div><div><br></div><div>Regards,</div><div>Nithya</div><div><div class="gmail-h5"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div><br></div><div><div><div>stor1data bricks:<br></div><div><div>/dev/sdb1 26T 1,1T 25T 4% /mnt/glusterfs/vol0<br></div><div>/dev/sdc1 50T 16T 34T 33% /mnt/glusterfs/vol1</div></div><div><br></div><div><div>stor2data bricks:</div></div><div><div>/dev/sdb1 26T 1,1T 25T 4% /mnt/glusterfs/vol0<br></div><div>/dev/sdc1 50T 16T 34T 33% /mnt/glusterfs/vol1</div></div><div><br></div><div>stor3data bricks:</div></div><div><div> /dev/sdb1 25T 638G 24T 3% /mnt/disk_b1/glusterfs/vol0<br></div><div> /dev/sdb2 25T 654G 24T 3% /mnt/disk_b2/glusterfs/vol0</div></div><div> /dev/sdc1 50T 15T 35T 30% /mnt/disk_c/glusterfs/vol1<br></div><div> /dev/sdd1 50T 15T 35T 30% /mnt/disk_d/glusterfs/vol1</div></div><div><span><div><br></div><div><br></div><div>[root@stor1 ~]# grep -n "share" /var/lib/glusterd/vols/volumedisk1/*</div><div>/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor1data.mnt-glusterfs-vol1-brick1.vol:3: option shared-brick-count 1</div><div>/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor1data.mnt-glusterfs-vol1-brick1.vol.rpmsave:3: option shared-brick-count 1</div></span><div>/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor2data.mnt-glusterfs-vol1-brick1.vol:3: option shared-brick-count 1</div><span><div>/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor2data.mnt-glusterfs-vol1-brick1.vol.rpmsave:3: option shared-brick-count 0</div></span><div>/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_c-glusterfs-vol1-brick1.vol:3: option shared-brick-count 1</div><span><div>/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_c-glusterfs-vol1-brick1.vol.rpmsave:3: option shared-brick-count 0</div></span><div>/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_d-glusterfs-vol1-brick1.vol:3: option shared-brick-count 1</div><span><div>/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_d-glusterfs-vol1-brick1.vol.rpmsave:3: option shared-brick-count 0</div></span></div><div><br></div><div><div>[root@stor2 ~]# grep -n "share" 
/var/lib/glusterd/vols/volumedisk1/*</div><span><div>/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor1data.mnt-glusterfs-vol1-brick1.vol:3: option shared-brick-count 1</div></span><div>/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor1data.mnt-glusterfs-vol1-brick1.vol.rpmsave:3: option shared-brick-count 0</div><div>/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor2data.mnt-glusterfs-vol1-brick1.vol:3: option shared-brick-count 1</div><div>/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor2data.mnt-glusterfs-vol1-brick1.vol.rpmsave:3: option shared-brick-count 1</div><div>/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_c-glusterfs-vol1-brick1.vol:3: option shared-brick-count 1</div><span><div>/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_c-glusterfs-vol1-brick1.vol.rpmsave:3: option shared-brick-count 0</div></span><div>/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_d-glusterfs-vol1-brick1.vol:3: option shared-brick-count 1</div><span><div>/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_d-glusterfs-vol1-brick1.vol.rpmsave:3: option shared-brick-count 0</div></span></div><div><br></div><div><div>[root@stor3t ~]# grep -n "share" /var/lib/glusterd/vols/volumedisk1/*</div><span><div>/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor1data.mnt-glusterfs-vol1-brick1.vol:3: option shared-brick-count 1<br></div><div>/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor1data.mnt-glusterfs-vol1-brick1.vol.rpmsave:3: option shared-brick-count 1</div></span><div>/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor2data.mnt-glusterfs-vol1-brick1.vol:3: option shared-brick-count 1</div><span><div>/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor2data.mnt-glusterfs-vol1-brick1.vol.rpmsave:3: option shared-brick-count 0</div></span><div>/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_c-glusterfs-vol1-brick1.vol:3: option shared-brick-count 1</div><span><div>/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_c-glusterfs-vol1-brick1.vol.rpmsave:3: option shared-brick-count 0</div></span><div>/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_d-glusterfs-vol1-brick1.vol:3: option shared-brick-count 1</div><span><div>/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_d-glusterfs-vol1-brick1.vol.rpmsave:3: option shared-brick-count 0</div></span></div><div><br></div><div class="gmail_extra">Thanks for your help,</div><div class="gmail_extra">Greetings.</div><span class="gmail-m_5857708467870841856HOEnZb"><font color="#888888"><div class="gmail_extra"><br></div><div class="gmail_extra">Jose V.</div></font></span><div><div class="gmail-m_5857708467870841856h5"><div class="gmail_extra"><br></div><div class="gmail_extra"><br><div class="gmail_quote">2018-02-28 5:07 GMT+01:00 Nithya Balachandran <span dir="ltr"><<a href="mailto:nbalacha@redhat.com" target="_blank">nbalacha@redhat.com</a>></span>:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">Hi Jose,<div><br></div><div>There is a known issue with gluster 3.12.x builds (see [1]) so you may be running into this.</div><div><br></div><div>The "<span 
style="font-size:12.8px">shared-brick-count" values seem fine on </span><span style="font-size:12.8px">stor1. Please send us "</span><span style="font-size:12.8px">grep -n "share" /var/lib/glusterd/vols/</span><span style="font-size:12.8px">volumed<wbr>isk1/*" results </span><span style="font-size:12.8px">for the other nodes so we can check if they are the cause.</span></div><div><span style="font-size:12.8px"><br></span></div><div><span style="font-size:12.8px"><br></span></div><div><span style="font-size:12.8px">Regards,</span></div><div><span style="font-size:12.8px">Nithya</span></div><div><br></div><div><br></div><div><br></div><div>[1] <a href="https://bugzilla.redhat.com/show_bug.cgi?id=1517260" target="_blank">https://bugzilla.redhat.co<wbr>m/show_bug.cgi?id=1517260</a></div></div><div class="gmail_extra"><br><div class="gmail_quote"><div><div class="gmail-m_5857708467870841856m_3174327891829646280h5">On 28 February 2018 at 03:03, Jose V. Carrión <span dir="ltr"><<a href="mailto:jocarbur@gmail.com" target="_blank">jocarbur@gmail.com</a>></span> wrote:<br></div></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div><div class="gmail-m_5857708467870841856m_3174327891829646280h5"><div dir="ltr"><p>Hi, <br>
</p>
<p>Until a few days ago my whole glusterfs configuration was working fine. Today I
realized that the total size reported by the df command has changed and is now
smaller than the aggregate capacity of all the bricks in the volume.</p>
<p>I checked that all the volume statuses are fine, all the glusterd
daemons are running, and there are no errors in the logs; however, df shows a wrong
total size.<br>
</p>
<p>My configuration for one volume: volumedisk1</p>
[root@stor1 ~]# gluster volume status volumedisk1 detail<br>
<p>Status of volume: volumedisk1<br>
------------------------------------------------------------------------------<br>
Brick : Brick stor1data:/mnt/glusterfs/vol1/brick1<br>
TCP Port : 49153 <br>
RDMA Port : 0 <br>
Online : Y <br>
Pid : 13579 <br>
File System : xfs <br>
Device : /dev/sdc1 <br>
Mount Options : rw,noatime <br>
Inode Size : 512 <br>
Disk Space Free : 35.0TB <br>
Total Disk Space : 49.1TB <br>
Inode Count : 5273970048 <br>
Free Inodes : 5273123069 <br>
------------------------------------------------------------------------------<br>
Brick : Brick stor2data:/mnt/glusterfs/vol1/brick1<br>
TCP Port : 49153 <br>
RDMA Port : 0 <br>
Online : Y <br>
Pid : 13344 <br>
File System : xfs <br>
Device : /dev/sdc1 <br>
Mount Options : rw,noatime <br>
Inode Size : 512 <br>
Disk Space Free : 35.0TB <br>
Total Disk Space : 49.1TB <br>
Inode Count : 5273970048 <br>
Free Inodes : 5273124718 <br>
------------------------------------------------------------------------------<br>
Brick : Brick stor3data:/mnt/disk_c/glusterfs/vol1/brick1<br>
TCP Port : 49154 <br>
RDMA Port : 0 <br>
Online : Y <br>
Pid : 17439 <br>
File System : xfs <br>
Device : /dev/sdc1 <br>
Mount Options : rw,noatime <br>
Inode Size : 512 <br>
Disk Space Free : 35.7TB <br>
Total Disk Space : 49.1TB <br>
Inode Count : 5273970048 <br>
Free Inodes : 5273125437 <br>
------------------------------------------------------------------------------<br>
Brick : Brick stor3data:/mnt/disk_d/glusterfs/vol1/brick1<br>
TCP Port : 49155 <br>
RDMA Port : 0 <br>
Online : Y <br>
Pid : 17459 <br>
File System : xfs <br>
Device : /dev/sdd1 <br>
Mount Options : rw,noatime <br>
Inode Size : 512 <br>
Disk Space Free : 35.6TB <br>
Total Disk Space : 49.1TB <br>
Inode Count : 5273970048 <br>
Free Inodes : 5273127036 <br>
</p>
<p>Then the full size for volumedisk1 should be: 49.1TB + 49.1TB + 49.1TB + 49.1TB = <b>196.4 TB</b>, but df shows:<br>
</p>
<p>[root@stor1 ~]# df -h<br>
Filesystem Size Used Avail Use% Mounted on<br>
/dev/sda2 48G 21G 25G 46% /<br>
tmpfs 32G 80K 32G 1% /dev/shm<br>
/dev/sda1 190M 62M 119M 35% /boot<br>
/dev/sda4 395G 251G 124G 68% /data<br>
/dev/sdb1 26T 601G 25T 3% /mnt/glusterfs/vol0<br>
/dev/sdc1 50T 15T 36T 29% /mnt/glusterfs/vol1<br>
stor1data:/volumedisk0<br>
76T 1,6T 74T 3% /volumedisk0<br>
stor1data:/volumedisk1<br>
<b>148T</b> 42T 106T 29% /volumedisk1</p>
<p>That is roughly one brick short: 196.4 TB - 49.1 TB ≈ 148 TB<br>
</p>
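<p>To spell that arithmetic out (just a quick back-of-the-envelope check in Python; the sizes are the ones from the brick detail and df output above):</p>
<pre>
# Rough check: four ~49.1 TB bricks should aggregate to ~196.4 TB,
# but the mounted volume only reports ~148 TB.
bricks_tb = [49.1, 49.1, 49.1, 49.1]   # Total Disk Space of each volumedisk1 brick
expected = sum(bricks_tb)              # ~196.4
reported = 148.0                       # size df shows for stor1data:/volumedisk1
print(round(expected, 1), reported, round(expected - reported, 1))
</pre>
<p>The missing ~48 TB is almost exactly the size of one brick, which is why it looks like one brick is being left out of the total.</p>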
<p>It's a production system so I hope you can help me.<br>
</p>
<p>Thanks in advance.</p>
<p>Jose V.</p>
<p><br>
</p>
<p>Below some other data of my configuration:<br>
</p>
<p>[root@stor1 ~]# gluster volume info<br>
<br>
Volume Name: volumedisk0<br>
Type: Distribute<br>
Volume ID: 0ee52d94-1131-4061-bcef-bd8cf898da10<br>
Status: Started<br>
Snapshot Count: 0<br>
Number of Bricks: 4<br>
Transport-type: tcp<br>
Bricks:<br>
Brick1: stor1data:/mnt/glusterfs/vol0/brick1<br>
Brick2: stor2data:/mnt/glusterfs/vol0/brick1<br>
Brick3: stor3data:/mnt/disk_b1/glusterfs/vol0/brick1<br>
Brick4: stor3data:/mnt/disk_b2/glusterfs/vol0/brick1<br>
Options Reconfigured:<br>
performance.cache-size: 4GB<br>
cluster.min-free-disk: 1%<br>
performance.io-thread-count: 16<br>
performance.readdir-ahead: on<br>
<br>
Volume Name: volumedisk1<br>
Type: Distribute<br>
Volume ID: 591b7098-800e-4954-82a9-6b6d81c9e0a2<br>
Status: Started<br>
Snapshot Count: 0<br>
Number of Bricks: 4<br>
Transport-type: tcp<br>
Bricks:<br>
Brick1: stor1data:/mnt/glusterfs/vol1/brick1<br>
Brick2: stor2data:/mnt/glusterfs/vol1/brick1<br>
Brick3: stor3data:/mnt/disk_c/glusterfs/vol1/brick1<br>
Brick4: stor3data:/mnt/disk_d/glusterfs/vol1/brick1<br>
Options Reconfigured:<br>
cluster.min-free-inodes: 6%<br>
performance.cache-size: 4GB<br>
cluster.min-free-disk: 1%<br>
performance.io-thread-count: 16<br>
performance.readdir-ahead: on</p>
<p>[root@stor1 ~]# grep -n "share" /var/lib/glusterd/vols/volumedisk1/*<br>
/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor1data.mnt-glusterfs-vol1-brick1.vol:3: option shared-brick-count 1<br>
/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor1data.mnt-glusterfs-vol1-brick1.vol.rpmsave:3: option shared-brick-count 1<br>
/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor2data.mnt-glusterfs-vol1-brick1.vol:3: option shared-brick-count 0<br>
/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor2data.mnt-glusterfs-vol1-brick1.vol.rpmsave:3: option shared-brick-count 0<br>
/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_c-glusterfs-vol1-brick1.vol:3: option shared-brick-count 0<br>
/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_c-glusterfs-vol1-brick1.vol.rpmsave:3: option shared-brick-count 0<br>
/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_d-glusterfs-vol1-brick1.vol:3: option shared-brick-count 0<br>
/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_d-glusterfs-vol1-brick1.vol.rpmsave:3: option shared-brick-count 0<br>
</p>
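<p>For convenience, the same check can be scripted; below is a minimal sketch (it only automates the grep above and assumes the standard /var/lib/glusterd/vols/volumedisk1 layout):</p>
<pre>
# Minimal sketch: print the shared-brick-count option found in each brick volfile.
import glob
import re

for path in sorted(glob.glob("/var/lib/glusterd/vols/volumedisk1/*.vol")):
    with open(path) as volfile:
        match = re.search(r"option\s+shared-brick-count\s+(\d+)", volfile.read())
    if match:
        print(path, "shared-brick-count =", match.group(1))
</pre>
<p>Since every brick here sits on its own partition, the expected value in each file should be 1.</p>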
<p><br>
</p></div>
<br></div></div>______________________________<wbr>_________________<br>
Gluster-users mailing list<br>
<a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a><br>
<a href="http://lists.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">http://lists.gluster.org/mailman/listinfo/gluster-users</a><br></blockquote></div><br></div>
</blockquote></div><br></div></div></div></div>
</blockquote></div></div></div><br></div></div>
</blockquote></div><br></div></div>