[Gluster-users] re-use a brick from an old gluster volume to create a new one.
Strahil Nikolov
hunter86_bg at yahoo.com
Thu Jun 18 16:55:19 UTC 2020
In theory it should be possible to convert a distributed volume to a replicated one (but not in place).
I guess you can try the following on a test setup:
1. Create the distributed volume as per your setup
2. Create another volume of replica type
3. Fill the distributed volume with data
4. Set up geo-replication between the two volumes
5. Once they are in sync, bring the firewall up to cut off all clients (downtime is inevitable).
6. Stop the 2 volumes
7. Make a backup of /var/lib/glusterd/vols/
8. Then rename the volume dirs to the desired names, so that the new volume is swapped in for the old one - clients won't need reconfiguration
9. Rename all files inside to reflect the new volume name
10. Use sed or vim to update the files with the new volume name (see the sketch after this list)
11. Restart glusterd on all nodes and start the volume
12. Verify that the replica volume has the name of the distributed volume
13. Bring the firewall down to allow access from the clients
Note: There are other approaches to rename a volume, but I think this one is by far the most straightforward - rename the volume dir, rename the volume files, and swap the old volume name inside the files for the new one.
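For reference, a rough shell sketch of the above; the volume names (olddist/newrepl), hosts (server1/server2) and brick paths are only placeholders, and the usual geo-replication prerequisites (such as passwordless ssh to the slave host) are assumed to already be in place:

# 1-2. create the distributed volume and the new replica volume
gluster volume create olddist server1:/bricks/dist/brick1
gluster volume start olddist
gluster volume create newrepl replica 2 server1:/bricks/repl/brick1 server2:/bricks/repl/brick1
gluster volume start newrepl

# 3-4. after loading the data, geo-replicate from the distributed volume
#      to the replica volume and wait until the session reports it is in sync
gluster system:: execute gsec_create
gluster volume geo-replication olddist server2::newrepl create push-pem
gluster volume geo-replication olddist server2::newrepl start
gluster volume geo-replication olddist server2::newrepl status

# 5-7. once the clients are cut off: stop geo-rep and both volumes, back up the config
gluster volume geo-replication olddist server2::newrepl stop
gluster volume stop olddist
gluster volume stop newrepl
cp -a /var/lib/glusterd/vols /var/lib/glusterd/vols.bak

# 8-10. on every node: rename the volume dir, the files inside it, and the
#       volume name embedded in the files
cd /var/lib/glusterd/vols
mv olddist olddist.orig
mv newrepl olddist
cd olddist
for f in *newrepl*; do mv "$f" "${f//newrepl/olddist}"; done
grep -rl newrepl . | xargs sed -i 's/newrepl/olddist/g'

# 11-13. restart glusterd on all nodes, start the renamed volume, verify,
#        then open the firewall for the clients again
systemctl restart glusterd
gluster volume start olddist
gluster volume info olddist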
Best Regards,
Strahil Nikolov
On 18 June 2020 19:22:46 GMT+03:00, Computerisms Corporation <bob at computerisms.ca> wrote:
>Hi Gluster Gurus,
>
>Due to some hasty decisions and inadequate planning/testing, I find
>myself with a single-brick distributed gluster volume. I had initially
>intended to extend it to a replicated setup with an arbiter, based on a
>post I found that said that was possible, but I clearly messed up in
>the creation of the volume, as I have since come to understand that a
>distributed brick cannot be converted to replicated.
>The system is using gluster 5.4 from the Debian repos.
>
>So it seems I will have to delete the existing volume and create a new
>one, and I am now thinking it would be more future-proof to go with a
>2x2 distributed-replicate volume anyway. Regardless, I am trying to
>find a path from the old gluster volume to the new one with a minimum
>of downtime. In the worst-case scenario, I can wipe the existing
>gluster, make a new one, and restore from backup. But I am hoping I can
>re-use the existing brick in a new gluster configuration and avoid that
>much downtime.
>
>So I synced the whole setup into a test environment, and thanks to a
>helpful post on this list I found this article:
>
>https://joejulian.name/post/glusterfs-path-or-a-prefix-of-it-is-already-part-of-a-volume/
>
>So I tried wiping the gluster and recreating it: I removed the
>attributes and the .glusterfs directory from the brick, and it
>initially seems to work in my test environment, kinda. When I do the
>gluster create command, include the existing brick as the first one,
>and leave it for a couple of days, the replicated brick ends up with
>only about 80% of the data. I tested this a few times and that is
>pretty consistent.
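For reference, the brick cleanup described in that article boils down to roughly the following; the brick path below is just an example:

# strip the old volume's metadata from the brick so it can be reused
setfattr -x trusted.glusterfs.volume-id /bricks/brick1
setfattr -x trusted.gfid /bricks/brick1
rm -rf /bricks/brick1/.glusterfs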
>
>If I try with straight 2 replicated bricks, that never really changes
>even after triggering multiple heals, and when I list files on the
>gluster mount, the file attributes such as owner/group/perms/data are
>replaced with question marks on a significant number of files, and
>those files are not ls'able except as part of a directory.
>
>If I try with the 2x2 setup, the replicated brick also has only about
>80% of the data initially, and after a few days of rebalancing, df
>shows the two new distributed bricks to be almost exactly the same
>size, but the replica of the original/reused brick still ends up being
>5-7% smaller than the original, and the same symptoms persist: files
>not being accessible and showing question marks for
>permissions/owner/data/etc.
>
>And this takes days, so it is definitely not faster than restoring from
>backup.
>
>I have been looking for other solutions, but if they exist, I have not
>found them so far. Could someone provide some guidance or point me at a
>solution, or let me know if restoring from backup really is the best
>way forward?