[Gluster-users] Expand distributed replicated volume
Hugues Lepesant
hugues at lepesant.com
Tue Apr 29 09:51:48 UTC 2014
Hi all,
I want to create a distributed replicated volume, and I want to be able to expand it later.
To test this, I have 6 nodes, each with a 10G disk to share.
First I create the initial volume:
# gluster volume create testvol replica 2 transport tcp node-01:/export/sdb1/brick node-02:/export/sdb1/brick node-03:/export/sdb1/brick node-04:/export/sdb1/brick
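(For completeness: the other nodes were probed from node-01 beforehand, and the volume is started right after creation; roughly:)
# gluster peer probe node-02
# gluster peer probe node-03
# gluster peer probe node-04
# gluster peer probe node-05
# gluster peer probe node-06
# gluster volume start testvol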
On the client, I mount the volume:
# mount -t glusterfs node-01:/testvol /storage-pool
# df -h /storage-pool/
Filesystem Size Used Avail Use% Mounted on
node-01:/testvol 20G 3.9G 17G 20% /storage-pool
The volume size is 20G, as I expected (2 replica pairs x 10G = 20G usable).
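(For the test I mount by hand; a persistent mount via /etc/fstab would be something like the line below, I believe:)
node-01:/testvol /storage-pool glusterfs defaults,_netdev 0 0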
Now I want to expand this volume by adding bricks.
To do so, I run the following on node-01:
# gluster volume add-brick testvol node-05:/export/sdb1/brick node-06:/export/sdb1/brick
# gluster volume rebalance testvol start
# gluster volume rebalance testvol status
# gluster volume info testvol
Volume Name: testvol
Type: Distributed-Replicate
Volume ID: cd24ec0f-3503-4d67-9032-1db1d0987f9c
Status: Started
Number of Bricks: 3 x 2 = 6
Transport-type: tcp
Bricks:
Brick1: node-01:/export/sdb1/brick
Brick2: node-02:/export/sdb1/brick
Brick3: node-03:/export/sdb1/brick
Brick4: node-04:/export/sdb1/brick
Brick5: node-05:/export/sdb1/brick
Brick6: node-06:/export/sdb1/brick
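(In case it is useful, I can also check the brick capacities from the server side; a quick sketch, assuming the bricks are mounted at /export/sdb1 on every node:)
# gluster volume status testvol detail
# df -h /export/sdb1      (run on each node)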
Then, back on the client:
# df -h /storage-pool/
Filesystem Size Used Avail Use% Mounted on
node-01:/testvol 20G 2.9G 18G 15% /storage-pool
Even if I umount/mount the volume, it is still 20G.
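(The remount is nothing more than, roughly:)
# umount /storage-pool
# mount -t glusterfs node-01:/testvol /storage-pool
# df -h /storage-pool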
Shouldn't it display a size of 30G?
By contrast, if I create a distributed volume first and then add bricks with the replica option:
# gluster volume create testvol transport tcp node-01:/export/sdb1/brick node-02:/export/sdb1/brick node-03:/export/sdb1/brick
# gluster volume start testvol
# gluster volume add-brick testvol replica 2 node-04:/export/sdb1/brick node-05:/export/sdb1/brick node-06:/export/sdb1/brick
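(For a volume that already holds data, I understand a full self-heal would also be needed so existing files get copied onto the new replica bricks, something like:)
# gluster volume heal testvol full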
# gluster volume info testvol
Volume Name: testvol
Type: Distributed-Replicate
Volume ID: 31159749-85fb-4006-8240-25b74a7eb537
Status: Started
Number of Bricks: 3 x 2 = 6
Transport-type: tcp
Bricks:
Brick1: node-01:/export/sdb1/brick
Brick2: node-04:/export/sdb1/brick
Brick3: node-02:/export/sdb1/brick
Brick4: node-05:/export/sdb1/brick
Brick5: node-03:/export/sdb1/brick
Brick6: node-06:/export/sdb1/brick
On the client:
# df -h /storage-pool/
Filesystem Size Used Avail Use% Mounted on
node-01:/testvol 30G 97M 30G 1% /storage-pool
I'm using glusterfs-server 3.5.0 on Ubuntu 14.04 Trusty.
Any help is welcome.
Best regards,
Hugues