[Gluster-users] Clarification on common tasks

Gandalf Corvotempesta gandalf.corvotempesta at gmail.com
Thu Aug 11 09:13:34 UTC 2016

I would like to make some clarification on common tasks needed by
gluster administrators.

A) Let's assume a disk/brick has failed (or is going to fail) and I
would like to replace it.
What is the proper way to do so with no data loss or downtime?

Looking through the mailing list, the procedure seems to be the following:

1) kill the brick process (how can I tell which process is the brick
process to kill?). I have the following on a test cluster (with just
one brick):
# ps ax -o command | grep gluster
/usr/sbin/glusterfsd -s --volfile-id
gv0. -p
/var/lib/glusterd/vols/gv0/run/ -S
/var/run/gluster/27555a68c738d9841879991c725e92e0.socket --brick-name
/export/sdb1/brick -l /var/log/glusterfs/bricks/export-sdb1-brick.log
--brick-port 49152 --xlator-option gv0-server.listen-port=49152
/usr/sbin/glusterd -p /var/run/glusterd.pid
/usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p
/var/lib/glusterd/glustershd/run/glustershd.pid -l
/var/log/glusterfs/glustershd.log -S

Which of these is the "brick process"?
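Not part of the original post, but as a hedged sketch: gluster can report brick PIDs directly, which avoids guessing from ps output. The volume name gv0 and the brick path /export/sdb1/brick are taken from the ps listing above.

```shell
# List bricks, ports and PIDs for the volume. The "Brick" rows are
# served by glusterfsd processes; glusterd is the management daemon
# and glustershd is the self-heal daemon, neither of which should
# be killed when replacing a brick.
gluster volume status gv0

# A glusterfsd process can also be matched to its brick via the
# --brick-name argument visible in ps output:
ps ax -o pid,command | grep '[g]lusterfsd' | grep '/export/sdb1/brick'
```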

2) unmount the brick, for example:
umount /dev/sdc

3) remove the failed disk

4) insert the new disk
5) create an XFS filesystem on the new disk
6) mount the new disk where the previous one was
7) add the new brick back to the volume. How?
8) run "gluster volume start <volname> force".

Why would I need step 8? If the volume is already started and
working (remember that I would like to change the disk with no downtime,
thus I can't stop the volume), why should I "start" it again?
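For reference, the steps above might be written out roughly as follows. This is only a sketch, not a confirmed procedure: the volume name gv0, brick path /export/sdb1/brick, and device /dev/sdb1 are assumptions taken from the ps output above, and the final heal step is my addition, not part of the list quoted from the mailing list.

```shell
# 1) note the failed brick's PID from the status output, then kill it
gluster volume status gv0
kill <brick-pid>                 # placeholder: substitute the real PID

# 2-3) unmount the failed brick's filesystem and pull the disk
umount /export/sdb1

# 4-5) after inserting the new disk, create an XFS filesystem on it
#      (assumption: the new disk appears as /dev/sdb1 again)
mkfs.xfs -i size=512 /dev/sdb1

# 6) remount at the original brick path
mount /dev/sdb1 /export/sdb1

# 8) (re)start any stopped brick processes for the volume
gluster volume start gv0 force

# not in the original list: trigger a full self-heal so replicas
# repopulate the empty brick
gluster volume heal gv0 full
```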

B) Let's assume I would like to add a bunch of new bricks on existing
servers. What is the proper procedure for doing so?
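A hedged sketch of what this might look like (not from the original post): the volume name gv0, server names, and brick paths are assumptions, and for a replicated volume bricks must be added in multiples of the replica count.

```shell
# Add new bricks to the existing volume (paths are illustrative)
gluster volume add-brick gv0 \
    server1:/export/sdc1/brick server2:/export/sdc1/brick

# Spread existing data onto the new bricks, then watch progress
gluster volume rebalance gv0 start
gluster volume rebalance gv0 status
```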

Ceph has a good documentation page where such common tasks are explained;
I've not found anything similar for Gluster.
