[Gluster-users] Add more capacity to existing glusterfs replicate

Bambang Sumitra bambang.sumitra at gmail.com
Sat Dec 26 13:54:37 UTC 2020


Hi Strahil,

Finally I was able to add the new brick and remove the old one, and now I
have enough storage capacity :)

Thank you for your guidance, Strahil. Happy holidays!



On Sat, Dec 26, 2020, 00:40 Strahil Nikolov <hunter86_bg at yahoo.com> wrote:

> Hi Bambang,
>
> > /dev/sdc1 /usr/local/mariadb/columnstore/gluster2 xfs defaults 1 2 (
> > add to /etc/fstab)
> Add 'noatime' or 'relatime' in order to reduce additional I/O that is
> used to update access time for the files on the bricks themselves.
> Also, you can use 'inode64' as a mount option.
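> For example, keeping your device and mount point from above, the fstab
> entry could look something like this:
> /dev/sdc1 /usr/local/mariadb/columnstore/gluster2 xfs defaults,noatime,inode64 1 2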
>
> > command for dbroot1 volume
> > gluster volume add-brick dbroot1
> > 10.1.1.60:/usr/local/mariadb/columnstore/gluster2/brick3
> > gluster volume add-brick dbroot1
> > 10.1.1.61:/usr/local/mariadb/columnstore/gluster2/brick3
> You need to add the bricks in replica sets (which means 'replica 2' -> 2
> bricks at a time).
> So it should be:
> gluster volume add-brick dbroot1
> 10.1.1.60:/usr/local/mariadb/columnstore/gluster2/brick3
> 10.1.1.61:/usr/local/mariadb/columnstore/gluster2/brick3
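> You can double-check the replica layout and brick count before and after
> adding with:
> gluster volume info dbroot1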
>
> > gluster volume status dbroot1 <to check status>
> > gluster volume remove-brick dbroot1
> > 10.1.1.60:/usr/local/mariadb/columnstore/gluster1/brick1 start
> > gluster volume remove-brick dbroot1
> > 10.1.1.61:/usr/local/mariadb/columnstore/gluster1/brick1 start
> Removal of bricks is also done in replica sets (which means 'replica 2'
> -> 2 bricks at a time).
> So it should be:
> gluster volume remove-brick dbroot1
> 10.1.1.60:/usr/local/mariadb/columnstore/gluster1/brick1
> 10.1.1.61:/usr/local/mariadb/columnstore/gluster1/brick1 start
>
> > gluster volume remove-brick dbroot1
> > 10.1.1.60:/usr/local/mariadb/columnstore/gluster1/brick1 status  <to
> > check status>
> > gluster volume remove-brick dbroot1
> > 10.1.1.61:/usr/local/mariadb/columnstore/gluster1/brick1 status  <to
> > check status>
> Status should be checked with the same command used for start:
> gluster volume remove-brick dbroot1
> 10.1.1.60:/usr/local/mariadb/columnstore/gluster1/brick1
> 10.1.1.61:/usr/local/mariadb/columnstore/gluster1/brick1 status
>
> > gluster volume remove-brick dbroot1
> > 10.1.1.60:/usr/local/mariadb/columnstore/gluster1/brick1 commit
> > gluster volume remove-brick dbroot1
> > 10.1.1.60:/usr/local/mariadb/columnstore/gluster1/brick1 commit
> Same for commit, but only run it once the remove-brick status above shows
> 'completed', so no unmigrated data is lost:
> gluster volume remove-brick dbroot1
> 10.1.1.60:/usr/local/mariadb/columnstore/gluster1/brick1
> 10.1.1.61:/usr/local/mariadb/columnstore/gluster1/brick1 commit
> >
>
> Repeat for dbroot2 just like for dbroot1.
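> The commands are the same, just with the dbroot2 volume name and its own
> brick paths (the paths below are only placeholders, adjust them to your
> actual dbroot2 bricks):
> gluster volume add-brick dbroot2
> 10.1.1.60:/usr/local/mariadb/columnstore/gluster2/brick4
> 10.1.1.61:/usr/local/mariadb/columnstore/gluster2/brick4
> gluster volume remove-brick dbroot2
> 10.1.1.60:/usr/local/mariadb/columnstore/gluster1/brick2
> 10.1.1.61:/usr/local/mariadb/columnstore/gluster1/brick2 start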
>
> >
> > Do the above commands need to be executed on one server only (for
> > example 10.1.1.60), or on both servers (10.1.1.60 and 10.1.1.61)?
> You only need to run them on one node; any node in the cluster will do.
>
> P.S.: If you wish, you can keep the old disks and just do a rebalance :)
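> In that case, after adding the new bricks, the rebalance would be:
> gluster volume rebalance dbroot1 start
> gluster volume rebalance dbroot1 status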
>
> Happy holidays to all!
>
> Best Regards,
> Strahil Nikolov
>
>