[Gluster-users] Expanding brick size in glusterfs 3.7.11

Ashish Pandey aspandey at redhat.com
Fri Apr 26 06:44:07 UTC 2019


Pat, 
I would like to see the final configuration of your gluster volume after you added the bricks on the new node. 

You mentioned that - 
"The new brick was a new server with with 12 of 24 disk bays filled (we couldn't afford to fill them all at the time). These 12 disks are managed in a hardware RAID-6." 

If all the new bricks are on one new node, that is probably not a good situation to be in. 
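The output of the following two commands would show the layout (the volume name "gv0" below is just a placeholder for your actual volume name): 

    # show volume type, brick list and options
    gluster volume info gv0
    # show which node hosts each brick, plus capacity per brick
    gluster volume status gv0 detail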

@Pascal, I agree with your suggestion: 
"Long story short: I'd consider creating a second RAID across your 12 new disks and adding this as a second brick to the gluster storage. That's what gluster is for, after all: to scale your storage :) In the case of RAID 6 you will lose the capacity of two disks, but you will gain a lot in terms of redundancy and data protection." 


--- 
Ashish 

----- Original Message -----

From: "Pascal Suter" <pascal.suter at dalco.ch> 
To: gluster-users at gluster.org 
Sent: Friday, April 26, 2019 11:52:22 AM 
Subject: Re: [Gluster-users] Expanding brick size in glusterfs 3.7.11 



I may add to that that I have expanded Linux filesystems (XFS and ext4) both via LVM and by adding disks to a hardware RAID. From the OS point of view it does not make a difference: once the block device on which the filesystem resides has been expanded, the procedure is pretty much the same, and so far it has always worked like a charm. 
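As a rough sketch of the two paths (device names, VG/LV names and the mount point below are placeholders, not your actual setup): 

    # Path 1: the hardware RAID was grown in place, so the block device
    # backing the brick simply got bigger; grow the filesystem into it
    xfs_growfs /bricks/brick1          # XFS: pass the mount point
    # resize2fs /dev/sdb1              # ext4 equivalent (pass the device)

    # Path 2: LVM-backed brick; add a new disk to the VG, then grow the LV
    pvcreate /dev/sdc
    vgextend brickvg /dev/sdc
    lvextend -l +100%FREE /dev/brickvg/bricklv
    xfs_growfs /bricks/brick1          # grow the filesystem into the new space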


One word of caution though: I've just recently had a case with a RAID 6 across 12 disks (1 TB each, a 5-year-old RAID array) where, during a planned power outage, a disk failed; when turning the storage back on, a second disk failed right after that, and a third failed during the rebuild. Luckily this was a retired server used for backup only, so no harm done, but it shows that under the "right" circumstances, multi-disk failures are possible. The more disks you have in your RAID set, the higher the chance of a disk failure: by doubling the number of disks in your RAID set you double the chance of a disk failure, and therefore of a double or triple disk failure as well. 


Long story short: I'd consider creating a second RAID across your 12 new disks and adding this as a second brick to the gluster storage. That's what gluster is for, after all: to scale your storage :) In the case of RAID 6 you will lose the capacity of two disks, but you will gain a lot in terms of redundancy and data protection. 
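A minimal sketch of that, assuming a plain distributed volume called gv0 and the new RAID formatted and mounted at /bricks/brick2 on the new server (all names are placeholders): 

    # add the new RAID set as a second brick on the new node
    gluster volume add-brick gv0 newserver:/bricks/brick2
    # spread existing data across the old and new bricks
    gluster volume rebalance gv0 start
    gluster volume rebalance gv0 status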


You will also avoid the performance impact of the RAID expansion; that is usually a rather long process which will eat a lot of your performance while it is ongoing. 


Of course, if you have mirrored bricks, that's a different story, but I assume you don't. 
cheers 
Pascal 

On 26.04.19 05:35, Jim Kinney wrote: 


I've expanded bricks using LVM and there were no problems at all with gluster seeing the change. The expansion was performed basically simultaneously on both existing bricks of a replica. I would expect the RAID expansion to behave similarly. 
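In practice that just means running the same grow steps on every node that holds a copy of the brick, roughly like this (hostnames, LV name and mount point are placeholders): 

    # grow the LV and filesystem on each replica node in turn
    for host in node1 node2; do
        ssh $host "lvextend -l +100%FREE /dev/brickvg/bricklv && xfs_growfs /bricks/brick1"
    done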

On April 25, 2019 9:05:45 PM EDT, Pat Haley <phaley at mit.edu> wrote: 

Hi,

Last summer we added a new brick to our gluster volume (running 
glusterfs 3.7.11). The new brick was a new server with 12 of 24 
disk bays filled (we couldn't afford to fill them all at the time). 
These 12 disks are managed in a hardware RAID-6. We have recently been 
able to purchase another 12 disks. We would like to just add these new 
disks to the existing hardware RAID and thus expand the size of the 
brick. If we can successfully add them to the hardware RAID like this, 
will gluster have any problems with the expanded brick size?
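
(For concreteness, the plan would look roughly like this; the device, mount point and volume names below are only illustrative:) 

    # after adding the 12 disks to the RAID set via the controller tools
    # (vendor-specific, not shown), grow the brick filesystem into the new space
    xfs_growfs /bricks/brick1
    # confirm that gluster reports the larger brick
    df -h /bricks/brick1
    gluster volume status gv0 detail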

-- 

Pat Haley                          Email:  phaley at mit.edu
Center for Ocean Engineering       Phone:  (617) 253-6824
Dept. of Mechanical Engineering    Fax:    (617) 253-8125
MIT, Room 5-213                    http://web.mit.edu/phaley/www/
77 Massachusetts Avenue
Cambridge, MA  02139-4301 





-- 
Sent from my Android device with K-9 Mail. All typos are thumb related and reflect authenticity. 


_______________________________________________ 
Gluster-users mailing list 
Gluster-users at gluster.org 
https://lists.gluster.org/mailman/listinfo/gluster-users 


