[Gluster-users] Expanding a replicated volume
Sjors Gielen
sjors at sjorsgielen.nl
Thu Jul 2 12:25:15 UTC 2015
Hi all,
I'm running a test setup where I have a 1-brick volume that I want to
expand online into a 2-brick replicated volume. It's pretty hard to find
information on this; usually only distributed setups are discussed.
I'm using two machines for testing: mallorca and hawaii. They have been
added into each other's trusted pools.
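For completeness, the peering step boils down to something like this (run
on mallorca; 'gluster peer status' on both machines confirms the pool):
# gluster peer probe hawaii
# gluster peer status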
First, I create a 1-brick volume on Mallorca:
# mkdir -p /local/glustertest/stor1 /stor1
# gluster volume create stor1 mallorca:/local/glustertest/stor1
# gluster volume start stor1
# mount -t glusterfs mallorca:/stor1 /stor1
# echo "This is file A" >/stor1/fileA.txt
# echo "This is file B" >/stor1/fileB.txt
# dd if=/dev/zero of=/stor1/largefile.img bs=1M count=100
So now /stor1 and /local/glustertest/stor1 contain these files.
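A quick sanity check at this point: 'volume info' should still show a plain
1-brick volume (Type: Distribute, Number of Bricks: 1, if I read its output
correctly):
# gluster volume info stor1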
Then, on hawaii, I expand the volume to a 2-brick replicated setup:
# mkdir -p /local/glustertest/stor1 /stor1
# gluster volume add-brick stor1 replica 2 hawaii:/local/glustertest/stor1
# mount -t glusterfs hawaii:/stor1 /stor1
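If the add-brick was accepted, 'volume info' should now report Type:
Replicate and, if I remember the format right, Number of Bricks: 1 x 2 = 2:
# gluster volume info stor1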
At this point, /local/glustertest/stor1 is still filled on mallorca, and
empty on hawaii (except for .glusterfs). Here is the actual question: how
do I sync the contents of the two?
I tried:
* 'volume sync', but that only syncs volume definitions between peers, not
volume contents
* 'volume rebalance', but it's only for Distribute volumes
* 'volume heal stor1 full' (exact invocations below), which finished
successfully but didn't copy anything
* 'volume replace-brick', which according to older posts used to migrate
data, but nowadays only supports swapping the brick itself with 'commit
force'
* listing actual file names on Hawaii.
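For reference, the heal attempt was the following ('heal info' being, I
assume, the obvious way to list entries still pending):
# gluster volume heal stor1 full
# gluster volume heal stor1 info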
Only the last of those attempts had some effect: after listing
/stor1/fileA.txt on Hawaii, the file appeared in both /stor1 and
/local/glustertest/stor1. The other files were still missing. So a
potential fix could be to fetch a list of all filenames from Mallorca and
`ls` each of them so they get synced (a variant of this is sketched below),
but that seems like a silly solution.
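That variant, assuming the classic stat-based heal trigger from the older
Gluster docs still works here, would be to stat every file through the
client mount on hawaii:
# find /stor1 -print0 | xargs -0 stat >/dev/null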
There are two other viable solutions I could come up with:
* Stopping the volume, rsyncing the brick contents, adding the brick, and
starting the volume again (rough sketch below). But that's offline, and it
feels wrong to poke around inside Gluster's brick directory.
* Removing the volume, moving the brick directory, recreating the volume
with 2 replicas, and moving the old contents of the brick directory back
onto the new mount.
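The first option would look roughly like this, run on hawaii. This is
untested: -AXH is my guess at the rsync flags needed to preserve xattrs and
the .glusterfs hardlinks, and I'm not even sure add-brick is allowed while
the volume is stopped:
# gluster volume stop stor1
# rsync -aAXH mallorca:/local/glustertest/stor1/ /local/glustertest/stor1/
# gluster volume add-brick stor1 replica 2 hawaii:/local/glustertest/stor1
# gluster volume start stor1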
At some point, suddenly all files did appear on Hawaii, probably because of
the self-heal daemon. Is there some way to trigger the daemon to walk over
all files? Why didn't the explicit full self-heal do this?
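In case the daemon simply wasn't running on hawaii: 'gluster volume status'
should list a Self-heal Daemon entry per node, so that's the first thing
I'll check next time:
# gluster volume status stor1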
Thanks,
Sjors