[Gluster-users] Expand distributed replicated volume
Lalatendu Mohanty
lmohanty at redhat.com
Mon May 5 13:26:06 UTC 2014
On 05/05/2014 03:08 PM, Hugues Lepesant wrote:
>
> Hi all,
>
> Has anyone encountered this before?
>
> Best regards,
>
> Hugues
>
I tried similar steps on two Fedora 20 VMs (each with three 100GB
partitions) running gluster 3.5.0, but did not hit this issue. Here are
the steps I performed (a shell sketch of the commands follows the list):
1. Created a distributed-replicate volume (2x2).
2. Mounted the volume using the native glusterfs mount; "df -h" shows
200GB for the mount point.
3. Performed add-brick with 2 more partitions; the volume info now shows
3x2.
4. "df -h" now shows 300GB for the mount point.
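A minimal sketch of those steps as commands; the hostnames (fedora-01,
fedora-02), brick paths and mount point are placeholders, not the actual
test setup:

# gluster volume create testvol replica 2 transport tcp \
    fedora-01:/bricks/b1 fedora-02:/bricks/b1 \
    fedora-01:/bricks/b2 fedora-02:/bricks/b2
# gluster volume start testvol
# mount -t glusterfs fedora-01:/testvol /mnt/testvol
# df -h /mnt/testvol            # 2x2: shows 200GB
# gluster volume add-brick testvol \
    fedora-01:/bricks/b3 fedora-02:/bricks/b3
# df -h /mnt/testvol            # 3x2: now shows 300GB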
Thanks,
Lala
>
> -----Original Message-----
> *From:* Hugues Lepesant <hugues at lepesant.com>
> *Sent:* Tue 29-04-2014 11:51
> *Subject:* [Gluster-users] Expand distributed replicated volume
> *To:* gluster-users at gluster.org;
>
> Hi all,
>
> I want to create a distributed replicated volume.
> And I want to be able to expand this volume.
>
>
> To test this, I have 6 nodes, each with a 10G disk to share.
>
> First I create the initial volume.
>
> # gluster volume create testvol replica 2 transport tcp \
>     node-01:/export/sdb1/brick node-02:/export/sdb1/brick \
>     node-03:/export/sdb1/brick node-04:/export/sdb1/brick
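>
> (A newly created volume must also be started before it can be mounted;
> presumably this step was run but left out of the mail, as in the second
> example further below:)
>
> # gluster volume start testvol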
>
> On the client, I mount the volume:
>
> # mount -t glusterfs node-01:/testvol /storage-pool
> # df -h /storage-pool/
> Filesystem Size Used Avail Use% Mounted on
> node-01:/testvol 20G 3.9G 17G 20% /storage-pool
>
> The volume size is 20G, as I expect.
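>
> (A presumed sanity check, not in the original mail: at this point the
> volume info should report the layout as 2x2.)
>
> # gluster volume info testvol | grep 'Number of Bricks'
> Number of Bricks: 2 x 2 = 4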
>
> Now I want to expand this volume by adding bricks.
>
> To do so, I run on node-01:
> # gluster volume add-brick testvol node-05:/export/sdb1/brick \
>     node-06:/export/sdb1/brick
> # gluster volume rebalance testvol start
> # gluster volume rebalance testvol status
> # gluster volume info testvol
>
> Volume Name: testvol
> Type: Distributed-Replicate
> Volume ID: cd24ec0f-3503-4d67-9032-1db1d0987f9c
> Status: Started
> Number of Bricks: 3 x 2 = 6
> Transport-type: tcp
> Bricks:
> Brick1: node-01:/export/sdb1/brick
> Brick2: node-02:/export/sdb1/brick
> Brick3: node-03:/export/sdb1/brick
> Brick4: node-04:/export/sdb1/brick
> Brick5: node-05:/export/sdb1/brick
> Brick6: node-06:/export/sdb1/brick
>
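> (One way to confirm that the new bricks contribute capacity is the
> per-brick detail view, which lists free and total disk space for each
> brick; a sketch of a check, not a command from the original mail:)
>
> # gluster volume status testvol detail
>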
> Then, back on the client:
>
> # df -h /storage-pool/
> Filesystem Size Used Avail Use% Mounted on
> node-01:/testvol 20G 2.9G 18G 15% /storage-pool
>
> Even if I unmount/remount the volume, it's always 20G.
> Shouldn't it display a size of 30G?
> It does when I instead create a distributed volume and then add-brick
> with the replica option:
>
> # gluster volume create testvol transport tcp \
>     node-01:/export/sdb1/brick node-02:/export/sdb1/brick \
>     node-03:/export/sdb1/brick
> # gluster volume start testvol
> # gluster volume add-brick testvol replica 2 \
>     node-04:/export/sdb1/brick node-05:/export/sdb1/brick \
>     node-06:/export/sdb1/brick
> # gluster volume info testvol
>
> Volume Name: testvol
> Type: Distributed-Replicate
> Volume ID: 31159749-85fb-4006-8240-25b74a7eb537
> Status: Started
> Number of Bricks: 3 x 2 = 6
> Transport-type: tcp
> Bricks:
> Brick1: node-01:/export/sdb1/brick
> Brick2: node-04:/export/sdb1/brick
> Brick3: node-02:/export/sdb1/brick
> Brick4: node-05:/export/sdb1/brick
> Brick5: node-03:/export/sdb1/brick
> Brick6: node-06:/export/sdb1/brick
>
> On the client:
> # df -h /storage-pool/
> Filesystem Size Used Avail Use% Mounted on
> node-01:/testvol 30G 97M 30G 1% /storage-pool
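>
> (Expected arithmetic: 6 bricks of 10G in replica-2 pairs give 3
> distribute subvolumes x 10G = 30G, which matches this df output and is
> the size the first sequence should have shown as well.)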
>
> I'm using glusterfs-server 3.5.0 on Ubuntu 14.04 Trusty.
>
> Any help is welcome.
> Best regards,
> Hugues
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users