[Gluster-users] remove old gluster storage domain and resize remaining gluster storage domain
Bill James
bill.james at j2.com
Thu Dec 1 11:11:25 UTC 2016
glusterfs-3.7.11-1.el7.x86_64
I have a 3-node oVirt cluster with a replica 3 gluster volume.
But for some reason the volume is not using the full size available.
I thought maybe it was because I had created a second gluster volume on
the same partition, so I tried to remove it.
I was able to put the storage domain in maintenance mode and detach it, but I could
not find any window in which the "Remove" option was enabled.
Now if I select "Attach Data" I see that oVirt thinks the volume is still
there, although it is not.
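For reference, the gluster-side cleanup I would expect to go with that is roughly the
following (just a sketch assuming the stock gluster CLI; "gv2" and the brick path are
placeholder names, not my actual volume):

    gluster volume stop gv2                  # stop the unused volume first
    gluster volume delete gv2                # remove the volume definition
    rm -rf /ovirt-store/brick1/gv2           # clean up the leftover brick directory on each node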
Two questions:
1. How do I clear out the old removed volume from oVirt?
2. How do I get gluster to use the full disk space available?
It's a 1T partition, but it only created a 225G gluster volume. Why? How
do I get the space back?
All three nodes look the same:
/dev/mapper/rootvg01-lv02  1.1T  135G  929G  13%  /ovirt-store
ovirt1-gl.j2noc.com:/gv1   225G  135G   91G  60%  /rhev/data-center/mnt/glusterSD/ovirt1-gl.j2noc.com:_gv1
[root@ovirt1 prod ovirt1-gl.j2noc.com:_gv1]# gluster volume status
Status of volume: gv1
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick ovirt1-gl.j2noc.com:/ovirt-store/brick1/gv1    49152    0    Y    5218
Brick ovirt3-gl.j2noc.com:/ovirt-store/brick1/gv1    49152    0    Y    5678
Brick ovirt2-gl.j2noc.com:/ovirt-store/brick1/gv1    49152    0    Y    61386
NFS Server on localhost 2049 0 Y 31312
Self-heal Daemon on localhost N/A N/A Y 31320
NFS Server on ovirt3-gl.j2noc.com 2049 0 Y 38109
Self-heal Daemon on ovirt3-gl.j2noc.com N/A N/A Y 38119
NFS Server on ovirt2-gl.j2noc.com 2049 0 Y 5387
Self-heal Daemon on ovirt2-gl.j2noc.com N/A N/A Y 5402
Task Status of Volume gv1
------------------------------------------------------------------------------
There are no active volume tasks
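If it helps narrow down question 2, these are the checks I would know to run next
(assuming the stock gluster CLI and plain df; the brick path is taken from the status
output above):

    gluster volume info gv1              # confirm brick paths and any volume options set
    gluster volume quota gv1 list        # see whether a quota could be capping the reported size
    df -h /ovirt-store/brick1/gv1        # size of the filesystem backing the brick on each node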
Thanks.