[Gluster-users] Migrating bricks from a 2 drive raid 0 to a single HD

Louis Marascio marascio at gmail.com
Tue Jul 21 19:21:09 UTC 2015


Just to follow up in case anyone else goes through this procedure: the
migration was successful using the steps quoted below. After doing the
hard drive swap I ran fix-layout and am currently running a full
rebalance.
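
For anyone finding this later, that last step was just the stock gluster
CLI; a rough sketch, assuming the volume is named gv0 (substitute your
own volume name):

    gluster volume rebalance gv0 fix-layout start
    # once fix-layout completes, migrate the data and watch progress
    gluster volume rebalance gv0 start
    gluster volume rebalance gv0 status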

Louis

---
Louis R. Marascio
512-964-4569


On Fri, Jul 17, 2015 at 1:21 PM, Louis Marascio <marascio at gmail.com> wrote:
> I currently have a 30 TB distributed-replicate gluster volume with 10 bricks
> and a replica count of 2 (so a 5x2 configuration). The cluster is made up of
> 5 nodes, and each node has four 3.5" HD slots. Each node hosts 2 bricks, and
> each brick is composed of two 3 TB hard drives in a raid 0 configuration, so
> each brick is 6 TB. A sketch of the brick layout follows this paragraph.
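>
> For clarity on how the replicas pair up: gluster forms replica pairs from
> consecutive bricks in the order given at create time, so a 5x2 layout like
> mine comes from something along these lines (a sketch only; gv0, the node
> names, and the brick paths are placeholders, not my actual config):
>
>     gluster volume create gv0 replica 2 \
>         node1:/bricks/b1 node2:/bricks/b1 \
>         node3:/bricks/b1 node4:/bricks/b1 \
>         node5:/bricks/b1 node1:/bricks/b2 \
>         node2:/bricks/b2 node3:/bricks/b2 \
>         node4:/bricks/b2 node5:/bricks/b2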
>
> The volume is getting quite full and I need to stretch the hardware as long as
> possible, so my plan is to migrate away from bricks built around two 3 TB
> drives in raid 0 to bricks on a single 6 TB drive. By doing this I will free
> up 2 HD slots per node and can then add new bricks, doubling the capacity of
> the volume to 60 TB.
>
> To the crux of the matter: I'm looking for any advice or thoughts on doing
> this migration. After some discussion on IRC, shyam and I came to the
> conclusion that the following procedure is a good option (downtime is
> ok):
>
>   1.  Run a heal operation on the entire volume (a sketch of the heal
>       commands follows this list).
>
>   2.  Shutdown all cluster nodes.
>
>   3.  Pull sda and sdb (slot1 and slot2) from the node to upgrade. In my
>       config sd[ab] = md1 = brick1. Due to how my replication is set up, I
>       will be doing the hard drive swap starting with brick2.
>
>   4.  Insert a new 6 TB disk into slot1 (where sda was).
>
>   5.  Boot the node from a livecd or equivalent. We should have a 6 TB drive
>       as sda and brick2 should be md2 (comprising sdc and sdd).
>
>   6.  Clone /dev/md2 onto /dev/sda. I will do this with either dd or cat;
>       either way, the result is a bit-perfect copy of /dev/md2 onto /dev/sda,
>       cloning all of brick2's data (a command sketch for steps 6-8 follows
>       this list).
>
>   7.  Zero out the mdadm superblock on /dev/sda in case any md metadata was
>       picked up as part of the clone; we don't need it anymore, and zeroing
>       is harmless if nothing is found.
>
>   8.  My fstab uses labels for mount points, so I will verify that the label
>       on the new 6 TB sda drive matches the old md2 (it was just cloned, so
>       it SHOULD be right).
>
>   9.  Shutdown the node.
>
>   10. Pull sdc and sdd (slot3 and slot4). Move the newly cloned 6 TB drive
>       from slot1 to slot3 (where sdc was). Return the original sda and sdb to
>       their original slots (remember, these belong to brick1 and are not
>       being modified at all at this point). I should now have brick1 back,
>       brick2 should live on a single 6 TB hard drive, and there should be a
>       single free slot for our new brick, brick3, to be added later after
>       ensuring we have migrated brick2 successfully.
>
>   11. Boot the nodes. Gluster should see no change, as the 6 TB drive is
>       mounted at the same spot the previous md2 brick was (a quick
>       verification sketch follows this list).
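>
> For step 1, a minimal sketch of the heal commands I have in mind (gv0 is a
> placeholder for the real volume name):
>
>     # trigger a full self-heal, then make sure nothing is left pending
>     gluster volume heal gv0 full
>     gluster volume heal gv0 info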
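>
> For steps 6-8, roughly what I intend to run from the livecd. This is a
> sketch, not a tested script, and it assumes the device names come up the
> same way they do on the installed system:
>
>     # assemble the source array if the livecd has not done so already
>     mdadm --assemble --scan
>
>     # bit-for-bit copy of the old raid0 array onto the new 6 TB disk
>     dd if=/dev/md2 of=/dev/sda bs=64M conv=fsync
>
>     # clear any md metadata on the new disk; harmless if none is found
>     mdadm --zero-superblock /dev/sda
>
>     # confirm the filesystem label on the clone matches what fstab expects
>     blkid /dev/sda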
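>
> For step 11, once the node is back up I plan to sanity-check that gluster is
> happy with the swapped brick before touching anything else (again, gv0 is a
> placeholder):
>
>     # brick processes should all be online, and nothing should need healing
>     gluster volume status gv0
>     gluster volume heal gv0 info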
>
> At this point adding a new 6 TB brick into the newly freed slot is
> straightforward enough (a sketch follows this paragraph). Since I have a
> replica 2 setup, I will do the above steps on two nodes at a time so that
> each new 6 TB brick has a partner to replicate to.
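>
> Something along these lines for that last part, once two nodes each have a
> new brick formatted and mounted (gv0 and the brick paths are placeholders):
>
>     # bricks must be added in multiples of the replica count (here, a pair)
>     gluster volume add-brick gv0 nodeA:/bricks/b3 nodeB:/bricks/b3
>
>     # then fix the layout and rebalance so data spreads onto the new bricks
>     gluster volume rebalance gv0 fix-layout start
>     gluster volume rebalance gv0 start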
>
> Can anyone see any problems with this approach? Is there a better way?
>
> Thanks (especially to shyam for helping out on IRC)!
>
> Louis
>
> Note: the bricks are actually 5.4 TB each and the volume is actually 27 TB,
> as a small bit of each drive is carved out for the boot partition. I omitted
> these numbers above to keep things simple and the numbers round.
>
> ---
> Louis R. Marascio
> 512-964-4569
