[Gluster-users] Add more capacity to existing gulsterfs replicate

Strahil Nikolov hunter86_bg at yahoo.com
Fri Dec 25 17:40:18 UTC 2020


Hi Bambang,

> /dev/sdc1 /usr/local/mariadb/columnstore/gluster2 xfs defaults 1 2 (
> add to /etc/fstab)
Add 'noatime' or 'relatime' in order to reduce additional I/O that is
used to update access time for the files on the bricks themselves.
Also, you can use 'inode64' as a mount option.
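A minimal sketch of what that fstab entry could look like with those options added (device, mount point, and filesystem taken from your example; the exact option set is up to you):

```
/dev/sdc1 /usr/local/mariadb/columnstore/gluster2 xfs defaults,noatime,inode64 1 2
```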

> command for dbroot1 volume
> gluster volume add-brick dbroot1
> 10.1.1.60:/usr/local/mariadb/columnstore/gluster2/brick3
> gluster volume add-brick dbroot1
> 10.1.1.61:/usr/local/mariadb/columnstore/gluster2/brick3
You need to add the bricks one full replica set at a time (with 'replica 2',
that means 2 bricks in a single command).
So it should be like:
gluster volume add-brick dbroot1
10.1.1.60:/usr/local/mariadb/columnstore/gluster2/brick3
10.1.1.61:/usr/local/mariadb/columnstore/gluster2/brick3
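Once the add-brick succeeds, you can verify the volume layout (sketch; volume name from your setup). With replica 2 and the two new bricks, 'Number of Bricks' should read 2 x 2 = 4:

```shell
gluster volume info dbroot1
```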

> gluster volume status dbroot1 <to check status>
> gluster volume remove-brick dbroot1
> 10.1.1.60:/usr/local/mariadb/columnstore/gluster1/brick1 start
> gluster volume remove-brick dbroot1
> 10.1.1.61:/usr/local/mariadb/columnstore/gluster1/brick1 start
Removal of bricks is also done in replica sets (which means 'replica 2'
-> 2 bricks at a time).
So it should be:
gluster volume remove-brick dbroot1
10.1.1.60:/usr/local/mariadb/columnstore/gluster1/brick1
10.1.1.61:/usr/local/mariadb/columnstore/gluster1/brick1 start

> gluster volume remove-brick dbroot1
> 10.1.1.60:/usr/local/mariadb/columnstore/gluster1/brick1 status  <to
> check status>
> gluster volume remove-brick dbroot1
> 10.1.1.61:/usr/local/mariadb/columnstore/gluster1/brick1 status  <to
> check status>
Status should be checked with the same command used for start:
gluster volume remove-brick dbroot1
10.1.1.60:/usr/local/mariadb/columnstore/gluster1/brick1
10.1.1.61:/usr/local/mariadb/columnstore/gluster1/brick1 status

> gluster volume remove-brick dbroot1
> 10.1.1.60:/usr/local/mariadb/columnstore/gluster1/brick1 commit
> gluster volume remove-brick dbroot1
> 10.1.1.60:/usr/local/mariadb/columnstore/gluster1/brick1 commit
Same for commit:
gluster volume remove-brick dbroot1
10.1.1.60:/usr/local/mariadb/columnstore/gluster1/brick1
10.1.1.61:/usr/local/mariadb/columnstore/gluster1/brick1 commit

Repeat for dbroot2 just like for dbroot1.

> The above commands need to execute on one server only? for example in
> 10.1.1.60, or need to execute on both server (10.1.1.60 and
> 10.1.1.61)?
On one node only - any node in the cluster will do, as Gluster applies the
change cluster-wide.

P.S.: If you wish, you can keep the old disks in the volume and just do a
rebalance :)
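If you go that route, the rebalance itself is just (sketch; volume name from your setup):

```shell
gluster volume rebalance dbroot1 start
gluster volume rebalance dbroot1 status   # repeat until it reports 'completed'
```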

Happy holidays to all!

Best Regards,
Strahil Nikolov


