<html><body><div style="font-family: times new roman, new york, times, serif; font-size: 12pt; color: #000000"><div>Pat,<br></div><div>I would like to see the final configuration of your gluster volume after you added the bricks on the new node.<br></div><div><br></div><div>You mentioned:<br></div><div>"The new brick was a new server with 12 of 24 disk bays filled (we couldn't afford to fill them all at the time). These 12 disks are managed in a hardware RAID-6."<br></div><div><br></div><div>If all the new bricks are on one new node, that is probably not a good situation to be in.<br></div><div><br></div><div>@Pascal, I agree with your suggestion.<br></div><div>"Long story short: I'd consider creating a second RAID across your 12 new disks and adding it as a second brick to the gluster storage; that's what gluster is for, after all: to scale your storage :) In the case of RAID-6 you will lose the capacity of two disks, but you will gain a lot in terms of redundancy and data protection."<br></div><div><br></div><div><br></div><div>---<br></div><div>Ashish<br></div><div><br></div><hr id="zwchr"><div style="color:#000;font-weight:normal;font-style:normal;text-decoration:none;font-family:Helvetica,Arial,sans-serif;font-size:12pt;" data-mce-style="color: #000; font-weight: normal; font-style: normal; text-decoration: none; font-family: Helvetica,Arial,sans-serif; font-size: 12pt;"><b>From: </b>"Pascal Suter" <pascal.suter@dalco.ch><br><b>To: </b>gluster-users@gluster.org<br><b>Sent: </b>Friday, April 26, 2019 11:52:22 AM<br><b>Subject: </b>Re: [Gluster-users] Expanding brick size in glusterfs 3.7.11<br><div><br></div><p>I may add that I have expanded Linux filesystems (xfs and ext4), some via LVM and some by adding disks to a hardware RAID. From the OS point of view it does not make a difference: once the block device on which the filesystem resides has been expanded, the procedure is pretty much the same, and so far it has always worked like a charm. 
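As an editorial sketch of the two approaches discussed in this thread (grow the brick in place, or add the new RAID as a second brick) — every device, volume group, mount point, and volume name below is a placeholder, not Pat's actual configuration:

```shell
# Option 1: grow the existing brick in place after expanding the hardware
# RAID (assumes the brick is an xfs filesystem on an LVM logical volume).
pvresize /dev/sdb                              # let LVM see the enlarged RAID device
lvextend -l +100%FREE /dev/vg_brick/lv_brick   # grow the LV into the new space
xfs_growfs /srv/gluster/brick1                 # grow xfs online at its mount point

# Option 2 (Pascal's suggestion): build a second RAID-6 from the 12 new
# disks and add it to the volume as another brick.
mkfs.xfs /dev/sdc                              # the new RAID-6 device
mkdir -p /srv/gluster/brick2
mount /dev/sdc /srv/gluster/brick2
gluster volume add-brick myvol server1:/srv/gluster/brick2
gluster volume rebalance myvol start           # spread existing data over both bricks
```

With option 1, nothing gluster-specific needs reconfiguring: the brick simply reports the filesystem's larger size; `df` on the clients and `gluster volume status <vol> detail` should both reflect it.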
<br></p><p>One word of caution though: I recently had a case with a RAID-6 across 12 disks (1 TB each, a five-year-old RAID array) where a disk failed during a planned power outage; when turning the storage back on, a second disk failed right after that, and a third failed during the rebuild. Luckily this was a retired server used for backup only, so no harm done, but it shows that under the "right" circumstances, multi-disk failures are possible. The more disks you have in your RAID set, the higher the chance of a disk failure; by doubling the number of disks in your RAID set you double the chance of a single disk failure, and therefore of a double or triple disk failure as well. <br></p><p>Long story short: I'd consider creating a second RAID across your 12 new disks and adding it as a second brick to the gluster storage; that's what gluster is for, after all: to scale your storage :) In the case of RAID-6 you will lose the capacity of two disks, but you will gain a lot in terms of redundancy and data protection. <br></p><p>Also, you will not have the performance impact of the RAID expansion; that is usually a rather long process which eats a lot of your performance while it's ongoing. <br></p><p>Of course, if you have mirrored bricks that's a different story, but I assume you don't. <br></p><div class="moz-cite-prefix">cheers</div><div class="moz-cite-prefix">Pascal <br></div><div class="moz-cite-prefix"><br></div><div class="moz-cite-prefix">On 26.04.19 05:35, Jim Kinney wrote:<br></div><blockquote cite="mid:FDD348D5-8B24-44A0-9F68-290DB061087E@gmail.com">I've expanded bricks using LVM and there were no problems at all with gluster seeing the change. The expansion was performed basically simultaneously on both existing bricks of a replica. 
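Pascal's scaling point above can be sanity-checked with a quick calculation; the 3% per-disk annual failure rate used here is purely an illustrative assumption, not a figure from this thread:

```shell
# Chance that at least one disk in a set fails within a year, for an
# assumed (hypothetical) 3% per-disk annual failure rate.
awk 'BEGIN {
  p = 0.03                        # assumed per-disk annual failure rate
  for (n = 6; n <= 24; n *= 2)
    printf "%2d disks: %.1f%%\n", n, 100 * (1 - (1 - p)^n)
}'
```

At this assumed rate the chance rises from roughly 17% at 6 disks to about 31% at 12 and 52% at 24, so the odds of overlapping failures during a rebuild grow accordingly.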
I would expect the RAID expansion to behave similarly.<br> <br><div class="gmail_quote">On April 25, 2019 9:05:45 PM EDT, Pat Haley <a class="moz-txt-link-rfc2396E" href="mailto:phaley@mit.edu" target="_blank" data-mce-href="mailto:phaley@mit.edu"><phaley@mit.edu></a> wrote:<blockquote class="gmail_quote" style="margin: 0pt 0pt 0pt
0.8ex; border-left: 1px solid rgb(204, 204, 204);
padding-left: 1ex;" data-mce-style="margin: 0pt 0pt 0pt
0.8ex; border-left: 1px solid #cccccc; padding-left: 1ex;"><pre class="k9mail">Hi,
Last summer we added a new brick to our gluster volume (running
glusterfs 3.7.11). The new brick was a new server with 12 of 24
disk bays filled (we couldn't afford to fill them all at the time).
These 12 disks are managed in a hardware RAID-6. We have recently been
able to purchase another 12 disks. We would like to just add these new
disks to the existing hardware RAID and thus expand the size of the
brick. If we can successfully add them to the hardware RAID like this,
will gluster have any problems with the expanded brick size?
--</pre><hr><pre class="k9mail">Pat Haley                        Email: <a class="moz-txt-link-abbreviated" href="mailto:phaley@mit.edu" target="_blank" data-mce-href="mailto:phaley@mit.edu">phaley@mit.edu</a>
Center for Ocean Engineering     Phone: (617) 253-6824
Dept. of Mechanical Engineering  Fax:   (617) 253-8125
MIT, Room 5-213                  <a href="http://web.mit.edu/phaley/www/" target="_blank" data-mce-href="http://web.mit.edu/phaley/www/">http://web.mit.edu/phaley/www/</a>
77 Massachusetts Avenue
Cambridge, MA 02139-4301</pre><hr><pre class="k9mail">Gluster-users mailing list
<a class="moz-txt-link-abbreviated" href="mailto:Gluster-users@gluster.org" target="_blank" data-mce-href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a>
<a href="https://lists.gluster.org/mailman/listinfo/gluster-users" target="_blank" data-mce-href="https://lists.gluster.org/mailman/listinfo/gluster-users">https://lists.gluster.org/mailman/listinfo/gluster-users</a><br data-mce-bogus="1"></pre></blockquote></div><br> -- <br> Sent from my Android device with K-9 Mail. All tyopes are thumb related and reflect authenticity. <br></blockquote><br></div><div><br></div></div></body></html>