[Gluster-users] Geo-replication slaves are faulty after startup

Alexandr Porunov alexandr.porunov at gmail.com
Fri Nov 25 21:03:13 UTC 2016


Hello,

I want to set up geo-replication between two volumes. The volumes themselves
work just fine, but geo-replication doesn't work at all.

My master volume nodes are:
192.168.0.120
192.168.0.121
192.168.0.122

My slave volume nodes are:
192.168.0.123
192.168.0.124
192.168.0.125

My OS is: CentOS 7
I am running GlusterFS 3.8.5

Here is the status of the geo-replication session:
# gluster volume geo-replication gv0 geoaccount@192.168.0.123::gv0 status


MASTER NODE      MASTER VOL    MASTER BRICK        SLAVE USER    SLAVE                            SLAVE NODE       STATUS    CRAWL STATUS       LAST_SYNCED
-----------------------------------------------------------------------------------------------------------------------------------------------------------------
192.168.0.120    gv0           /data/brick1/gv0    geoaccount    geoaccount@192.168.0.123::gv0    192.168.0.123    Active    Changelog Crawl    2016-11-25 22:25:12
192.168.0.121    gv0           /data/brick1/gv0    geoaccount    geoaccount@192.168.0.123::gv0    N/A              Faulty    N/A                N/A
192.168.0.122    gv0           /data/brick1/gv0    geoaccount    geoaccount@192.168.0.123::gv0    N/A              Faulty    N/A                N/A


I don't understand why it doesn't work. Here are the relevant log files from
the master node (192.168.0.120):
/var/log/glusterfs/etc-glusterfs-glusterd.vol.log -
http://paste.openstack.org/show/590503/

/var/log/glusterfs/mnt.log - http://paste.openstack.org/show/590504/

/var/log/glusterfs/run-gluster-shared_storage.log -
http://paste.openstack.org/show/590505/

/var/log/glusterfs/geo-replication/gv0/ssh%3A%2F%2Fgeoaccount%40192.168.0.123%3Agluster%3A%2F%2F127.0.0.1%3Agv0.log
- http://paste.openstack.org/show/590506/

Here is a log file from the slave node (192.168.0.123):
/var/log/glusterfs/geo-replication-slaves/5afe64e3-d4e9-452b-a9cf-10674e052616\:gluster%3A%2F%2F127.0.0.1%3Agv0.gluster.log
 - http://paste.openstack.org/show/590507/
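Since the faulty workers report N/A for the slave node, the gsyncd logs above are the place to look for the underlying error. A small helper like the following can pull the most recent error-level lines out of a log (a sketch: `recent_errors` is a made-up name, and `] E [` is the marker GlusterFS uses for error-level log lines):

```shell
# Sketch: extract the most recent error-level lines from a glusterfs log.
# GlusterFS log lines carry a severity letter, e.g. "] E [" for errors,
# so grepping for that marker surfaces what made a worker go Faulty.
recent_errors() {
    logfile="$1"
    grep -F '] E [' "$logfile" | tail -n 20
}

# Typical use on a master node (path taken from the report above):
# recent_errors /var/log/glusterfs/geo-replication/gv0/*.log
```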

Here is how I have created a session:
On slave nodes:
useradd geoaccount
groupadd geogroup
usermod -a -G geogroup geoaccount
usermod -a -G geogroup root
passwd geoaccount
mkdir -p /var/mountbroker-root
chown root:root -R /var/mountbroker-root
chmod 0711 -R /var/mountbroker-root
chown root:geogroup -R /var/lib/glusterd/geo-replication/*
chmod g=rwx,u=rwx,o-rwx -R /var/lib/glusterd/geo-replication/*
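A quick way to confirm the chmod/chown steps above actually took effect is a small mode check like this (a sketch: `check_mode` is a hypothetical helper, not part of Gluster; 711 is the mode set on /var/mountbroker-root above):

```shell
# Sketch: verify a directory carries the octal mode the mountbroker expects.
# check_mode is a made-up helper for manual verification, nothing more.
check_mode() {
    dir="$1"
    want="$2"
    got=$(stat -c '%a' "$dir")
    if [ "$got" = "$want" ]; then
        echo "OK: $dir is $got"
    else
        echo "MISMATCH: $dir is $got, expected $want"
    fi
}

# e.g. check_mode /var/mountbroker-root 711
```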

On the slave (192.168.0.123):
gluster system:: execute mountbroker opt mountbroker-root /var/mountbroker-root
gluster system:: execute mountbroker opt geo-replication-log-group geogroup
gluster system:: execute mountbroker opt rpc-auth-allow-insecure on
gluster system:: execute mountbroker user geoaccount gv0
/usr/libexec/glusterfs/set_geo_rep_pem_keys.sh geoaccount gv0 gv0
gluster volume set all cluster.enable-shared-storage enable
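For reference, if the mountbroker commands above succeeded, the slave's /etc/glusterfs/glusterd.vol should end up with option lines along these lines. This is my expectation from the standard manual mountbroker setup, so treat it as a cross-check rather than something to paste in:

```
volume management
    type mgmt/glusterd
    ...
    option mountbroker-root /var/mountbroker-root
    option mountbroker-geo-replication.geoaccount gv0
    option geo-replication-log-group geogroup
    option rpc-auth-allow-insecure on
end-volume
```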

Then I restarted glusterd on all the slaves:
systemctl restart glusterd

On the master node (192.168.0.120):
ssh-keygen
ssh-copy-id geoaccount@192.168.0.123
gluster system:: execute gsec_create container
gluster volume set all cluster.enable-shared-storage enable
gluster volume geo-replication gv0 geoaccount@192.168.0.123::gv0 create ssh-port 22 push-pem
gluster volume geo-replication gv0 geoaccount@192.168.0.123::gv0 config remote-gsyncd /usr/libexec/glusterfs/gsyncd
gluster volume geo-replication gv0 geoaccount@192.168.0.123::gv0 config use-meta-volume true
gluster volume geo-replication gv0 geoaccount@192.168.0.123::gv0 config sync-jobs 3
gluster volume geo-replication gv0 geoaccount@192.168.0.123::gv0 start
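Workers that go Faulty right after start are very often a passwordless-ssh problem between the master and slave nodes. Something like the sketch below prints the ssh invocation to try by hand from each master node (`print_ssh_checks` is a made-up helper; the secret.pem path is the default location gsec_create/push-pem uses, which is an assumption; user and IPs are taken from the commands above):

```shell
# Sketch: print the manual ssh checks for a list of slave nodes.
# If the printed command asks for a password or fails on a master whose
# worker is Faulty, the pem distribution is the problem, not gsyncd.
print_ssh_checks() {
    pem="$1"; shift
    for slave in "$@"; do
        echo "ssh -p 22 -i $pem geoaccount@$slave"
    done
}

# From a master node (secret.pem path is the assumed gsec_create default):
print_ssh_checks /var/lib/glusterd/geo-replication/secret.pem \
    192.168.0.123 192.168.0.124 192.168.0.125
```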

Does somebody know what is wrong with this setup? I have tried to set up
geo-replication several times without success. Please help me.

Sincerely,
Alexandr
