[Gluster-users] Add more capacity to existing glusterfs replicate
Bambang Sumitra
bambang.sumitra at gmail.com
Fri Dec 25 14:46:50 UTC 2020
Hi Strahil and Olaf,
Thank you for your suggestions. I have read the manual sent by Olaf
and mounted an additional disk on both servers with the commands below.
Command sequence executed on 10.1.1.60 and 10.1.1.61:
fdisk /dev/sdc
mkfs.xfs -i size=512 /dev/sdc1
mkdir /usr/local/mariadb/columnstore/gluster2
/dev/sdc1 /usr/local/mariadb/columnstore/gluster2 xfs defaults 1 2   (added to /etc/fstab)
mkdir /usr/local/mariadb/columnstore/gluster2/brick3
mkdir /usr/local/mariadb/columnstore/gluster2/brick4
I have now created brick3 and brick4 under the mount point
/usr/local/mariadb/columnstore/gluster2.
My brick layout is a bit different from the guide: the new disk is
mounted on /usr/local/mariadb/columnstore/gluster2, and I created two
directories on it for the bricks, brick3 and brick4. I hope this
configuration is fine.
Because these are production servers, I just want to make sure the
commands I am going to execute are correct.
I plan to execute these commands, as suggested in the previous email:
Commands for the dbroot1 volume:

gluster volume add-brick dbroot1 10.1.1.60:/usr/local/mariadb/columnstore/gluster2/brick3
gluster volume add-brick dbroot1 10.1.1.61:/usr/local/mariadb/columnstore/gluster2/brick3
gluster volume status dbroot1   <to check status>
gluster volume remove-brick dbroot1 10.1.1.60:/usr/local/mariadb/columnstore/gluster/brick1 start
gluster volume remove-brick dbroot1 10.1.1.61:/usr/local/mariadb/columnstore/gluster/brick1 start
gluster volume remove-brick dbroot1 10.1.1.60:/usr/local/mariadb/columnstore/gluster/brick1 status   <to check status>
gluster volume remove-brick dbroot1 10.1.1.61:/usr/local/mariadb/columnstore/gluster/brick1 status   <to check status>
gluster volume remove-brick dbroot1 10.1.1.60:/usr/local/mariadb/columnstore/gluster/brick1 commit
gluster volume remove-brick dbroot1 10.1.1.61:/usr/local/mariadb/columnstore/gluster/brick1 commit
Commands for the dbroot2 volume:

gluster volume add-brick dbroot2 10.1.1.60:/usr/local/mariadb/columnstore/gluster2/brick4
gluster volume add-brick dbroot2 10.1.1.61:/usr/local/mariadb/columnstore/gluster2/brick4
gluster volume status dbroot2   <to check status>
gluster volume remove-brick dbroot2 10.1.1.60:/usr/local/mariadb/columnstore/gluster/brick2 start
gluster volume remove-brick dbroot2 10.1.1.61:/usr/local/mariadb/columnstore/gluster/brick2 start
gluster volume remove-brick dbroot2 10.1.1.60:/usr/local/mariadb/columnstore/gluster/brick2 status   <to check status>
gluster volume remove-brick dbroot2 10.1.1.61:/usr/local/mariadb/columnstore/gluster/brick2 status   <to check status>
gluster volume remove-brick dbroot2 10.1.1.60:/usr/local/mariadb/columnstore/gluster/brick2 commit
gluster volume remove-brick dbroot2 10.1.1.61:/usr/local/mariadb/columnstore/gluster/brick2 commit
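One detail worth noting from Strahil's examples below: for a replica 2 volume, both bricks of a replica pair are normally passed to a single add-brick/remove-brick command rather than one brick at a time. A hypothetical dry-run helper that only prints the combined-form commands for review (volume names from this thread; the old brick paths are taken from the volume info quoted below):

```shell
# Dry-run: echoes the combined-form gluster commands instead of running them,
# so they can be reviewed before being pasted on one node.
OLD=/usr/local/mariadb/columnstore/gluster    # old brick mount (per volume info)
NEW=/usr/local/mariadb/columnstore/gluster2   # new brick mount

for vol in dbroot1 dbroot2; do
  case $vol in
    dbroot1) old=brick1; new=brick3 ;;
    dbroot2) old=brick2; new=brick4 ;;
  esac
  # Replica-2 volumes take brick pairs in a single command.
  echo "gluster volume add-brick $vol 10.1.1.60:$NEW/$new 10.1.1.61:$NEW/$new"
  echo "gluster volume remove-brick $vol 10.1.1.60:$OLD/$old 10.1.1.61:$OLD/$old start"
  echo "gluster volume remove-brick $vol 10.1.1.60:$OLD/$old 10.1.1.61:$OLD/$old status"
  echo "gluster volume remove-brick $vol 10.1.1.60:$OLD/$old 10.1.1.61:$OLD/$old commit"
done
```

Review the echoed lines before running them; the remove-brick "status" should report the migration as completed, and the old brick directories should be empty (apart from .glusterfs metadata), before the matching "commit".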
Do the above commands need to be executed on one server only (for
example, on 10.1.1.60), or do they need to be executed on both servers
(10.1.1.60 and 10.1.1.61)?
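Whichever node the commands end up being run from, I plan to verify the resulting layout from both peers afterwards, along these lines (a sketch; volume names from this thread):

```shell
# Run on each of 10.1.1.60 and 10.1.1.61 after the brick changes.
gluster peer status                # both peers should show State: Peer in Cluster (Connected)
gluster volume info dbroot1        # brick list should include the gluster2/brick3 pair
gluster volume info dbroot2        # brick list should include the gluster2/brick4 pair
gluster volume heal dbroot1 info   # ideally no pending heal entries
gluster volume heal dbroot2 info
```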
Thank you
Bambang Sumitra
On Fri, Dec 25, 2020 at 2:37 PM Strahil Nikolov <hunter86_bg at yahoo.com> wrote:
>
> Hi Bambang,
>
> you don't need to bring the application down as long as you use bigger or same size disks for the new storage.
>
> Merry Christmas to all who celebrate the holiday!
>
> The most convenient approach would be to:
> 1. Prepare the new disks as per https://docs.gluster.org/en/latest/Install-Guide/Configure/ (Partition the disk)
> Note: You have to do it on both nodes. If you don't use SELinux, skip the "context" mount option. I'm usually mounting with:
>
> rw,noatime,context=system_u:object_r:glusterd_brick_t:s0
>
> WARNING: it is strongly advised to create a directory in the mountpoint that you will use as a brick!
>
> 2. Add the 2 new bricks (brick = mount point) to the volume:
> gluster volume add-brick dbroot1 10.1.1.60:</full/path/to/new/mountpoint> 10.1.1.61:</full/path/to/new/mountpoint>
>
> 3. Next verify the volume status:
> gluster volume status dbroot1
>
> 4. Rebalance the volume (only if you plan to keep all disks in Gluster)
> gluster volume rebalance dbroot1 start
>
> Note: some of the data might remain on the old disks, as we haven't used the "force" keyword.
>
> You can repeat with "force" if you wish (once the old rebalance is over):
> gluster volume rebalance dbroot1 status
> gluster volume rebalance dbroot1 start force
>
> 5. Remove the old bricks (if you decide to do this, you can skip the rebalance):
> gluster volume remove-brick dbroot1 10.1.1.60:/usr/local/mariadb/columnstore/gluster/brick1 10.1.1.61:/usr/local/mariadb/columnstore/gluster/brick1 start
>
> To get the status of the data migration use:
> gluster volume remove-brick dbroot1 10.1.1.60:/usr/local/mariadb/columnstore/gluster/brick1 10.1.1.61:/usr/local/mariadb/columnstore/gluster/brick1 status
>
> Once it's all migrated, go to the bricks themselves (/usr/local/mariadb...) and verify that there are no files left.
>
>
> gluster volume remove-brick dbroot1 10.1.1.60:/usr/local/mariadb/columnstore/gluster/brick1 10.1.1.61:/usr/local/mariadb/columnstore/gluster/brick1 commit
>
> Note: If the volume were a pure replica (without the second set of bricks), you would get a warning that you need to use "force". In your case "commit" should be enough.
>
>
>
> Verify the status of the volume and voilà.
>
>
> Best Regards,
> Strahil Nikolov
>
> On Friday, December 25, 2020 at 00:46:04 GMT+2, Olaf Buitelaar <olaf.buitelaar at gmail.com> wrote:
>
> Hi,
>
> please see; https://docs.gluster.org/en/latest/Administrator-Guide/Managing-Volumes/
>
> Gluster offers online expansion of the volume: you can add bricks and/or nodes without taking mariadb offline if you want.
>
> Just use: gluster volume add-brick [vol] [bricks] (bricks must be added according to your replica count, in your case 2),
> or use: gluster volume replace-brick [vol] [old-brick] [new-brick] to replace a single brick.
>
> Best Olaf
>
> On Thu, Dec 24, 2020 at 23:35, Bambang Sumitra <bambang.sumitra at gmail.com> wrote:
> > Hi,
> >
> > I have a small data warehouse using MariaDB ColumnStore set up as a 2-server instance with glusterfs as the storage backend; I followed the setup from this guide: https://mariadb.com/kb/en/installing-and-configuring-a-multi-server-columnstore-system-11x
> >
> > Now our servers are almost running out of space. I have attached a new disk to both servers and plan to add more storage capacity to the glusterfs volume and remove the old bricks (if possible).
> >
> > Questions:
> > Can I do these steps:
> > 1. stop mariadb
> > 2. stop glusterfs
> > 3. mount the new disk to /mnt/newdisk
> > 4. copy data from the old brick to /mnt/newdisk
> > 5. unmount the old brick
> > 6. mount the new disk to /usr/local/mariadb/columnstore/gluster (the existing glusterfs mount)
> >
> > Or is there an easier and better way to add capacity? I don't mind keeping or removing the old bricks.
> >
> > Thank you,
> >
> > command output from host 10.1.1.60
> > root at mDWDB01:~# gluster volume status
> > Status of volume: dbroot1
> > Gluster process                                                TCP Port  RDMA Port  Online  Pid
> > ------------------------------------------------------------------------------
> > Brick 10.1.1.60:/usr/local/mariadb/columnstore/gluster/brick1  49152     0          Y       1541
> > Brick 10.1.1.61:/usr/local/mariadb/columnstore/gluster/brick1  49152     0          Y       1499
> > Self-heal Daemon on localhost                                  N/A       N/A        Y       1425
> > Self-heal Daemon on 10.1.1.61                                  N/A       N/A        Y       1367
> >
> > Task Status of Volume dbroot1
> > ------------------------------------------------------------------------------
> > There are no active volume tasks
> >
> > Status of volume: dbroot2
> > Gluster process                                                TCP Port  RDMA Port  Online  Pid
> > ------------------------------------------------------------------------------
> > Brick 10.1.1.60:/usr/local/mariadb/columnstore/gluster/brick2  49153     0          Y       1550
> > Brick 10.1.1.61:/usr/local/mariadb/columnstore/gluster/brick2  49153     0          Y       1508
> > Self-heal Daemon on localhost                                  N/A       N/A        Y       1425
> > Self-heal Daemon on 10.1.1.61                                  N/A       N/A        Y       1367
> >
> > Task Status of Volume dbroot2
> > ------------------------------------------------------------------------------
> > There are no active volume tasks
> >
> > mis at mDWDB01:~$ sudo gluster volume info
> > [sudo] password for mis:
> >
> > Volume Name: dbroot1
> > Type: Replicate
> > Volume ID: 22814201-3fae-4904-b0b7-d6e1716365ec
> > Status: Started
> > Snapshot Count: 0
> > Number of Bricks: 1 x 2 = 2
> > Transport-type: tcp
> > Bricks:
> > Brick1: 10.1.1.60:/usr/local/mariadb/columnstore/gluster/brick1
> > Brick2: 10.1.1.61:/usr/local/mariadb/columnstore/gluster/brick1
> > Options Reconfigured:
> > performance.client-io-threads: off
> > nfs.disable: on
> > transport.address-family: inet
> >
> > Volume Name: dbroot2
> > Type: Replicate
> > Volume ID: 6443b073-754d-440b-89e9-49c085114f46
> > Status: Started
> > Snapshot Count: 0
> > Number of Bricks: 1 x 2 = 2
> > Transport-type: tcp
> > Bricks:
> > Brick1: 10.1.1.60:/usr/local/mariadb/columnstore/gluster/brick2
> > Brick2: 10.1.1.61:/usr/local/mariadb/columnstore/gluster/brick2
> > Options Reconfigured:
> > performance.client-io-threads: off
> > nfs.disable: on
> > transport.address-family: inet
> >
> > mis at mDWDB01:~$ mount | grep column
> > /dev/sdb1 on /usr/local/mariadb/columnstore/gluster type xfs (rw,relatime,attr2,inode64,noquota)
> > 10.1.1.60:/dbroot2 on /usr/local/mariadb/columnstore/data2 type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
> >
> >
> > command output from host 10.1.1.61
> > mis at mDWUM01:~$ sudo gluster volume info
> > [sudo] password for mis:
> >
> > Volume Name: dbroot1
> > Type: Replicate
> > Volume ID: 22814201-3fae-4904-b0b7-d6e1716365ec
> > Status: Started
> > Snapshot Count: 0
> > Number of Bricks: 1 x 2 = 2
> > Transport-type: tcp
> > Bricks:
> > Brick1: 10.1.1.60:/usr/local/mariadb/columnstore/gluster/brick1
> > Brick2: 10.1.1.61:/usr/local/mariadb/columnstore/gluster/brick1
> > Options Reconfigured:
> > performance.client-io-threads: off
> > nfs.disable: on
> > transport.address-family: inet
> >
> > Volume Name: dbroot2
> > Type: Replicate
> > Volume ID: 6443b073-754d-440b-89e9-49c085114f46
> > Status: Started
> > Snapshot Count: 0
> > Number of Bricks: 1 x 2 = 2
> > Transport-type: tcp
> > Bricks:
> > Brick1: 10.1.1.60:/usr/local/mariadb/columnstore/gluster/brick2
> > Brick2: 10.1.1.61:/usr/local/mariadb/columnstore/gluster/brick2
> > Options Reconfigured:
> > performance.client-io-threads: off
> > nfs.disable: on
> > transport.address-family: inet
> >
> > mis at mDWUM01:~$ sudo gluster volume status
> > Status of volume: dbroot1
> > Gluster process                                                TCP Port  RDMA Port  Online  Pid
> > ------------------------------------------------------------------------------
> > Brick 10.1.1.60:/usr/local/mariadb/columnstore/gluster/brick1  49152     0          Y       1541
> > Brick 10.1.1.61:/usr/local/mariadb/columnstore/gluster/brick1  49152     0          Y       1499
> > Self-heal Daemon on localhost                                  N/A       N/A        Y       1367
> > Self-heal Daemon on mDWDB01                                    N/A       N/A        Y       1425
> >
> > Task Status of Volume dbroot1
> > ------------------------------------------------------------------------------
> > There are no active volume tasks
> >
> > Status of volume: dbroot2
> > Gluster process                                                TCP Port  RDMA Port  Online  Pid
> > ------------------------------------------------------------------------------
> > Brick 10.1.1.60:/usr/local/mariadb/columnstore/gluster/brick2  49153     0          Y       1550
> > Brick 10.1.1.61:/usr/local/mariadb/columnstore/gluster/brick2  49153     0          Y       1508
> > Self-heal Daemon on localhost                                  N/A       N/A        Y       1367
> > Self-heal Daemon on mDWDB01                                    N/A       N/A        Y       1425
> >
> > Task Status of Volume dbroot2
> > ------------------------------------------------------------------------------
> > There are no active volume tasks
> >
> > mis at mDWUM01:~$ mount | grep column
> > /dev/sdb1 on /usr/local/mariadb/columnstore/gluster type xfs (rw,relatime,attr2,inode64,noquota)
> > 10.1.1.61:/dbroot1 on /usr/local/mariadb/columnstore/data1 type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
> > mis at mDWUM01:~$
> >
> >
> >
> >
> >
> > ________
> >
> >
> >
> > Community Meeting Calendar:
> >
> > Schedule -
> > Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> > Bridge: https://meet.google.com/cpu-eiue-hvk
> > Gluster-users mailing list
> > Gluster-users at gluster.org
> > https://lists.gluster.org/mailman/listinfo/gluster-users
> >