[Gluster-users] mixing brick sizes and increasing replica counts

Matthew Nicholson matthew_nicholson at harvard.edu
Fri Jan 25 14:03:22 UTC 2013


Two questions I posed in ##gluster yesterday; I wanted to see if
anyone on the mailing list had thoughts:

Question #1: Mixing brick sizes in a distributed-replicated volume:

We have a 5x2 dist-rep volume with bricks that are all 28TB (one per
node). Our building block has been Dell R515s with 12 x 3TB drives.
Now it sounds like we might actually be getting more of the same
systems, but with 4TB drives, so instead of ~28TB usable per brick we
would end up with ~40TB usable.

Assuming we added these in pairs (keeping the replica count at 2) to
turn this into a 6x2 or a 7x2 volume, what would happen?

Best case: after a rebalance, a slightly higher percentage of files
(from the rebalance as well as net-new data) would end up on the
larger brick replica pairs.

"meh" case: Gluster doesn't know/care about the size, and therefor
only the smallest brick size will even be used in full, meaning 12TB
per "big brick" would essentially never be used..

The "best case", doesn't seem that unreasonable to me, but I've got a
suspicion the gluster isn't really aware of the brick sizes so the
"meh" case seems more likely ...any official work on this? Perhaps a
feature request?
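
For concreteness, the expansion I have in mind would look something
like this (just a sketch; the volume name, hostnames, and brick paths
below are made up):

    # Add one new replica pair of the bigger (40TB) bricks; bricks have to
    # be added in multiples of the replica count (2), so two at a time.
    gluster volume add-brick myvol \
        node11:/bricks/brick1 node12:/bricks/brick1

    # Redistribute existing files across the new 6x2 layout.
    gluster volume rebalance myvol start
    gluster volume rebalance myvol status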


Question #2:

Same 5x2 dist-rep volume: if we got another 5 bricks (let's assume
the same size) and wanted to turn this into a 5x3 volume, what would
be involved? I found this:
http://community.gluster.org/q/expand-a-replica-volume/
but still wasn't 100% clear. Could I just add the bricks and specify
the new replica count when I add them? What about deleting the volume,
recreating it, and letting Gluster heal/rebalance onto the new "empty"
set of nodes?
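
What I'm picturing for the 5x3 conversion, if add-brick really does
accept a new replica count, is roughly this (again just a sketch with
made-up names; one new brick per existing replica pair):

    # Raise the replica count from 2 to 3 by adding 5 bricks, one per
    # existing distribute subvolume.
    gluster volume add-brick myvol replica 3 \
        node11:/bricks/brick1 node12:/bricks/brick1 node13:/bricks/brick1 \
        node14:/bricks/brick1 node15:/bricks/brick1

    # Kick off a full self-heal so the new third copies get populated.
    gluster volume heal myvol full
    gluster volume heal myvol info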


Any insight anyone has with either of these, especially from
experience, would be a huge help!

Thanks!


--
Matthew Nicholson
matthew_nicholson at harvard.edu
Research Computing Specialist
FAS Research Computing
Harvard University


