[Gluster-users] Two brick distributed not redistributing

William Muriithi william.muriithi at gmail.com
Thu Dec 20 22:49:07 UTC 2012


Are these two services supposed to be installed on the server? What is
the purpose of each service?

[root@gfs1 ~]# service --status-all | grep gluster
glusterd (pid  3704) is running...
glusterfsd (pid 3901) is running...
[root@gfs1 ~]#

Wondering if any of you have come across this and can advise.

I have a two-brick setup that I am evaluating to see whether it will
work for us. I have noticed that the glusterfs clients seem to use
one brick consistently, which doesn't seem to match the
documentation. Is this the expected behaviour, or do I have a
misconfiguration somewhere in my installation?

From the client logs, I am certain all the clients see both
bricks, as they mount the sum of the two bricks' capacity.
I can pull out one brick and the clients fail over to the other brick.
They do seem to revert to the former brick when I connect it back to
the network. Is this the appropriate behaviour? Wouldn't it
be faster to alternate file creation between the bricks? Is
there anything I can do from iozone to alternate between bricks?
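For what it's worth, my working assumption (not verified against the Gluster source) is that the Distribute translator places each file on a brick based on a hash of the file name, so a benchmark that keeps writing the same file name will keep hitting the same brick. A toy sketch of that idea, using `cksum` purely as a stand-in for Gluster's real hash:

```shell
# Toy model of hash-based file placement. cksum is NOT Gluster's
# actual hash function; this only illustrates why a fixed file name
# always maps to the same brick.
BRICKS=2
for f in iozone.tmp file1 file2 file3; do
    # Hash the bare file name, then map it onto one of the bricks.
    h=$(printf '%s' "$f" | cksum | cut -d' ' -f1)
    echo "$f -> brick$(( h % BRICKS ))"
done
```

If that model is right, pointing iozone at several distinct file names (e.g. its multi-process throughput mode with a file per worker) should spread writes across both bricks, while a single test file will always land on one.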

[root@uranus williamm]# rpm -qa | grep gluster

[root@gfs1 ~]# rpm -qa | grep gluster

[root@gfs1 ~]# gluster volume status
Status of volume: example
Gluster process                                         Port    Online  Pid
Brick gfs2.example.com:/storage                        24009   Y       10381
Brick gfs1.example.com:/storage                        24009   Y       3901
NFS Server on localhost                                 38467   Y       3907
NFS Server on gfs2.example.com                         38467   Y       10387

[root@gfs1 ~]# gluster volume info

Volume Name: example
Type: Distribute
Volume ID: fcd31ea6-45e2-4c1f-bfa9-1bdb82f573d1
Status: Started
Number of Bricks: 2
Transport-type: tcp
Brick1: gfs2.example.com:/storage
Brick2: gfs1.example.com:/storage
[root@gfs1 ~]#

Thanks in advance
