[Gluster-users] Brick and Subvolume Info
Ashish Pandey
aspandey at redhat.com
Wed Nov 22 02:25:24 UTC 2017
Comments inline.
----- Original Message -----
From: "Gino Lisignoli" <glisignoli at gmail.com>
To: gluster-users at gluster.org
Sent: Wednesday, November 22, 2017 3:49:02 AM
Subject: [Gluster-users] Brick and Subvolume Info
Hello
I have a Distributed-Replicate volume and I would like to know if it is possible to see which sub-volume a brick belongs to, e.g.:
A Distributed-Replicate volume containing:
Number of Bricks: 2 x 2 = 4
Brick1: node1.localdomain:/mnt/data1/brick1
Brick2: node2.localdomain:/mnt/data1/brick1
Brick3: node1.localdomain:/mnt/data2/brick2
Brick4: node2.localdomain:/mnt/data2/brick2
Is it possible to list the bricks by sub-volume, showing which bricks are mirrors of each other? My assumption is that the order in which the bricks are listed determines which sub-volumes they are in.
>>>No, we don't have this utility yet. However, your assumption is correct: in your case, Brick1 and Brick2 belong to the first subvolume, and Brick3 and Brick4 to the second.
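>>>As an illustration (a sketch, assuming your volume is named "myvol"; "myvol-replicate-N" follows the usual subvolume naming convention from the generated volfile): with "Number of Bricks: 2 x 2 = 4", each consecutive pair of bricks in the "gluster volume info" listing forms one replicated subvolume:

    gluster volume info myvol | grep '^Brick[0-9]'
    Brick1: node1.localdomain:/mnt/data1/brick1    <-- myvol-replicate-0
    Brick2: node2.localdomain:/mnt/data1/brick1    <-- myvol-replicate-0
    Brick3: node1.localdomain:/mnt/data2/brick2    <-- myvol-replicate-1
    Brick4: node2.localdomain:/mnt/data2/brick2    <-- myvol-replicate-1

(The "<--" annotations are added by hand; the command itself prints only the brick lines.)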
Is it also possible to get the status of bricks in their sub-volume to determine their health? Eg, are the bricks in a state where I can offline one of the nodes, replace faulty drives and then online it again?
>>> You can use "gluster v status <volname>" to get the status of each brick, i.e. whether it is up or not.
When you talk about "health", you should also run "gluster v heal <volname> info" to see whether there is anything pending heal.
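For example (assuming again a volume named "myvol"):

    gluster volume status myvol       # per brick: TCP port, PID, Online (Y/N)
    gluster volume heal myvol info    # per brick: entries still pending heal

You generally want every brick showing Online "Y" and "Number of entries: 0" under heal info before taking anything down.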
To your next question, I would rather talk about bricks. You can kill/offline a brick and replace it without any issue. If you are not doing any IO, it should be an easy job.
Note that if you take a NODE offline and both bricks of any one subvolume are placed on that node, that subvolume will not be active/usable. At that point, if you create/write a file that hashes to that subvolume, you might see an IO error on the mount point. If you are NOT doing any IO, that should be fine.
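A minimal sketch of one way to do a single-brick replacement, using "replace-brick" (the volume name "myvol" and the new mount point /mnt/data1-new are made up for the example):

    gluster volume status myvol                          # note the PID of the failing brick
    kill <brick-pid>                                     # take just that one brick offline
    # ...replace the faulty drive, create a filesystem, mount at /mnt/data1-new...
    gluster volume replace-brick myvol \
        node1.localdomain:/mnt/data1/brick1 \
        node1.localdomain:/mnt/data1-new/brick1 \
        commit force                                     # swap the new brick in
    gluster volume heal myvol info                       # self-heal repopulates it; wait for 0 entries

Because the other replica stays up throughout, clients keep working and self-heal copies the data onto the new brick.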
If you want to remove both bricks of the same subvolume, I would suggest going via the remove-brick path (see the sketch after this list):
- First remove both bricks of the subvolume using the "gluster volume remove-brick" command.
- Then add 2 new bricks and run rebalance.
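As a concrete sketch of that path (hypothetical volume "myvol" again; here the second subvolume, Brick3+Brick4, is removed, and the two new brick paths are made up):

    gluster volume remove-brick myvol \
        node1.localdomain:/mnt/data2/brick2 node2.localdomain:/mnt/data2/brick2 start
    gluster volume remove-brick myvol \
        node1.localdomain:/mnt/data2/brick2 node2.localdomain:/mnt/data2/brick2 status
    # wait until status shows the data migration has completed, then:
    gluster volume remove-brick myvol \
        node1.localdomain:/mnt/data2/brick2 node2.localdomain:/mnt/data2/brick2 commit
    gluster volume add-brick myvol \
        node1.localdomain:/mnt/data3/brick3 node2.localdomain:/mnt/data3/brick3
    gluster volume rebalance myvol start

"start" migrates data off the outgoing bricks, so nothing is lost; "commit" finalizes the removal only after migration has completed.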
----
Ashish
_______________________________________________
Gluster-users mailing list
Gluster-users at gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users