[Gluster-users] Add more capacity to existing glusterfs replicate

Olaf Buitelaar olaf.buitelaar at gmail.com
Thu Dec 24 22:45:46 UTC 2020


Hi,

please see:
https://docs.gluster.org/en/latest/Administrator-Guide/Managing-Volumes/

Gluster offers online expansion of a volume: you can add bricks and/or
nodes without taking MariaDB offline if you want.

Just use: gluster volume add-brick [vol] [new bricks] (bricks must be added
according to your replica count, in your case in multiples of 2),
or use: gluster volume replace-brick [vol] [old brick] [new brick] commit force
to replace a single brick with a new one.
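
For example, a rough sketch only, assuming the new disks are mounted on both
nodes at /mnt/newdisk and you create a brick directory on them (adjust the
paths and volume names to your setup):

  # on both nodes: prepare a directory on the new disk to serve as the brick
  mkdir -p /mnt/newdisk/brick1

  # grow dbroot1 into a 2 x 2 distributed-replicate volume (replica count
  # stays 2), then spread existing data over the new replica pair
  gluster volume add-brick dbroot1 10.1.1.60:/mnt/newdisk/brick1 10.1.1.61:/mnt/newdisk/brick1
  gluster volume rebalance dbroot1 start

  # or instead move an existing brick onto the new disk; gluster heals the
  # data from the remaining replica onto the new brick
  gluster volume replace-brick dbroot1 10.1.1.60:/usr/local/mariadb/columnstore/gluster/brick1 10.1.1.60:/mnt/newdisk/brick1 commit force

Repeat the same for dbroot2 with its own brick directory, and keep an eye on
gluster volume status / gluster volume heal dbroot1 info until the heal or
rebalance has finished.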

Best Olaf

On Thu, 24 Dec 2020 at 23:35, Bambang Sumitra <bambang.sumitra at gmail.com> wrote:

> Hi,
>
> I have a small data warehouse using MariaDB ColumnStore, set up as two
> server instances with GlusterFS as the storage backend. I followed the
> setup from this guide:
> https://mariadb.com/kb/en/installing-and-configuring-a-multi-server-columnstore-system-11x
>
> Now our servers are almost running out of space. I have attached a new disk
> to both servers and plan to add more storage capacity to the GlusterFS
> volumes and remove the old bricks (if possible).
>
> Questions:
> Can I do these steps:
> 1. stop MariaDB
> 2. stop GlusterFS
> 3. mount the new disk at /mnt/newdisk
> 4. copy the data from the old brick to /mnt/newdisk
> 5. unmount the old brick
> 6. mount the new disk at /usr/local/mariadb/columnstore/gluster (the
> existing glusterfs mount)
>
> Or is there an easier and better way to add capacity? I don't mind keeping
> or removing the old bricks.
>
> Thank you,
>
> *command output from host 10.1.1.60*
> root at mDWDB01:~# gluster volume status
> Status of volume: dbroot1
> Gluster process                                                TCP Port  RDMA Port  Online  Pid
> ------------------------------------------------------------------------------
> Brick 10.1.1.60:/usr/local/mariadb/columnstore/gluster/brick1  49152     0          Y       1541
> Brick 10.1.1.61:/usr/local/mariadb/columnstore/gluster/brick1  49152     0          Y       1499
> Self-heal Daemon on localhost                                  N/A       N/A        Y       1425
> Self-heal Daemon on 10.1.1.61                                  N/A       N/A        Y       1367
>
> Task Status of Volume dbroot1
>
> ------------------------------------------------------------------------------
> There are no active volume tasks
>
> Status of volume: dbroot2
> Gluster process                                                TCP Port  RDMA Port  Online  Pid
> ------------------------------------------------------------------------------
> Brick 10.1.1.60:/usr/local/mariadb/columnstore/gluster/brick2  49153     0          Y       1550
> Brick 10.1.1.61:/usr/local/mariadb/columnstore/gluster/brick2  49153     0          Y       1508
> Self-heal Daemon on localhost                                  N/A       N/A        Y       1425
> Self-heal Daemon on 10.1.1.61                                  N/A       N/A        Y       1367
>
> Task Status of Volume dbroot2
>
> ------------------------------------------------------------------------------
> There are no active volume tasks
>
> mis at mDWDB01:~$ sudo gluster volume info
> [sudo] password for mis:
>
> Volume Name: dbroot1
> Type: Replicate
> Volume ID: 22814201-3fae-4904-b0b7-d6e1716365ec
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 2 = 2
> Transport-type: tcp
> Bricks:
> Brick1: 10.1.1.60:/usr/local/mariadb/columnstore/gluster/brick1
> Brick2: 10.1.1.61:/usr/local/mariadb/columnstore/gluster/brick1
> Options Reconfigured:
> performance.client-io-threads: off
> nfs.disable: on
> transport.address-family: inet
>
> Volume Name: dbroot2
> Type: Replicate
> Volume ID: 6443b073-754d-440b-89e9-49c085114f46
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 2 = 2
> Transport-type: tcp
> Bricks:
> Brick1: 10.1.1.60:/usr/local/mariadb/columnstore/gluster/brick2
> Brick2: 10.1.1.61:/usr/local/mariadb/columnstore/gluster/brick2
> Options Reconfigured:
> performance.client-io-threads: off
> nfs.disable: on
> transport.address-family: inet
>
> mis at mDWDB01:~$ mount |grep column
> /dev/sdb1 on /usr/local/mariadb/columnstore/gluster type xfs (rw,relatime,attr2,inode64,noquota)
> 10.1.1.60:/dbroot2 on /usr/local/mariadb/columnstore/data2 type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
>
>
> *command output from host 10.1.1.61*
> mis at mDWUM01:~$ sudo gluster volume info
> [sudo] password for mis:
>
> Volume Name: dbroot1
> Type: Replicate
> Volume ID: 22814201-3fae-4904-b0b7-d6e1716365ec
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 2 = 2
> Transport-type: tcp
> Bricks:
> Brick1: 10.1.1.60:/usr/local/mariadb/columnstore/gluster/brick1
> Brick2: 10.1.1.61:/usr/local/mariadb/columnstore/gluster/brick1
> Options Reconfigured:
> performance.client-io-threads: off
> nfs.disable: on
> transport.address-family: inet
>
> Volume Name: dbroot2
> Type: Replicate
> Volume ID: 6443b073-754d-440b-89e9-49c085114f46
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 2 = 2
> Transport-type: tcp
> Bricks:
> Brick1: 10.1.1.60:/usr/local/mariadb/columnstore/gluster/brick2
> Brick2: 10.1.1.61:/usr/local/mariadb/columnstore/gluster/brick2
> Options Reconfigured:
> performance.client-io-threads: off
> nfs.disable: on
> transport.address-family: inet
>
>
> mis at mDWUM01:~$ sudo gluster volume status
> Status of volume: dbroot1
> Gluster process                                                TCP Port  RDMA Port  Online  Pid
> ------------------------------------------------------------------------------
> Brick 10.1.1.60:/usr/local/mariadb/columnstore/gluster/brick1  49152     0          Y       1541
> Brick 10.1.1.61:/usr/local/mariadb/columnstore/gluster/brick1  49152     0          Y       1499
> Self-heal Daemon on localhost                                  N/A       N/A        Y       1367
> Self-heal Daemon on mDWDB01                                    N/A       N/A        Y       1425
>
> Task Status of Volume dbroot1
>
> ------------------------------------------------------------------------------
> There are no active volume tasks
>
> Status of volume: dbroot2
> Gluster process                                                TCP Port  RDMA Port  Online  Pid
> ------------------------------------------------------------------------------
> Brick 10.1.1.60:/usr/local/mariadb/columnstore/gluster/brick2  49153     0          Y       1550
> Brick 10.1.1.61:/usr/local/mariadb/columnstore/gluster/brick2  49153     0          Y       1508
> Self-heal Daemon on localhost                                  N/A       N/A        Y       1367
> Self-heal Daemon on mDWDB01                                    N/A       N/A        Y       1425
>
> Task Status of Volume dbroot2
>
> ------------------------------------------------------------------------------
> There are no active volume tasks
>
> mis at mDWUM01:~$ mount |grep column
> /dev/sdb1 on /usr/local/mariadb/columnstore/gluster type xfs (rw,relatime,attr2,inode64,noquota)
> 10.1.1.61:/dbroot1 on /usr/local/mariadb/columnstore/data1 type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
> mis at mDWUM01:~$
>
>
>
>

