[Gluster-users] GlusterFS replica synchronization problem
Łukasz Zygmański
vins@umk.pl
Tue Jun 16 13:56:02 UTC 2015
Hello,

Could you please tell me what I should do to enable (or fix) synchronous
replication between two GlusterFS nodes? At the moment my files are only
being synchronized about every 7 minutes.
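In case it helps, this is how I watch the backlog of files that are still
waiting to be replicated (just a sketch using the standard heal commands;
testapi_vol is the volume described below):

[root@gluster01 vins]# gluster volume heal testapi_vol info
(lists, per brick, the entries that still need healing; after my test
below, the new file stays on this list until it reaches gluster02)

[root@gluster01 vins]# gluster volume heal testapi_vol
(asks the self-heal daemon to start healing immediately instead of
waiting for its periodic run)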
Here is my configuration:
Gluster01 server = 10.75.3.43 (and also 10.75.2.41 for clients)
Gluster02 server = 10.75.3.44 (and also 10.75.2.42 for clients)
[root@gluster01 vins]# gluster volume status
Status of volume: testapi_vol
Gluster process                               TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gluster01.int:/glusterfs/testapi/brick  49153     0          Y       2693
Brick gluster02.int:/glusterfs/testapi/brick  49153     0          Y       5214
NFS Server on localhost                       2049      0          Y       3388
Self-heal Daemon on localhost                 N/A       N/A        Y       3396
NFS Server on gluster02.int                   2049      0          Y       6468
Self-heal Daemon on gluster02.int             N/A       N/A        Y       6476

Task Status of Volume testapi_vol
------------------------------------------------------------------------------
There are no active volume tasks
[root@gluster02 vins]# gluster volume status
Status of volume: testapi_vol
Gluster process                               TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gluster01.int:/glusterfs/testapi/brick  49153     0          Y       2693
Brick gluster02.int:/glusterfs/testapi/brick  49153     0          Y       5214
NFS Server on localhost                       2049      0          Y       6468
Self-heal Daemon on localhost                 N/A       N/A        Y       6476
NFS Server on 10.75.3.43                      2049      0          Y       3388
Self-heal Daemon on 10.75.3.43                N/A       N/A        Y       3396

Task Status of Volume testapi_vol
------------------------------------------------------------------------------
There are no active volume tasks
[root@gluster01 vins]# gluster volume info
Volume Name: testapi_vol
Type: Replicate
Volume ID: 7800a682-1f07-4464-bf7c-e1aba11f5190
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: gluster01.int:/glusterfs/testapi/brick
Brick2: gluster02.int:/glusterfs/testapi/brick
[root@gluster02 vins]# gluster volume info
Volume Name: testapi_vol
Type: Replicate
Volume ID: 7800a682-1f07-4464-bf7c-e1aba11f5190
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: gluster01.int:/glusterfs/testapi/brick
Brick2: gluster02.int:/glusterfs/testapi/brick
[root@gluster01 vins]# ip a
2: eno16780032: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq
state UP qlen 1000
link/ether 00:50:56:b2:11:15 brd ff:ff:ff:ff:ff:ff
inet 10.75.2.41/24 brd 10.75.2.255 scope global eno16780032
valid_lft forever preferred_lft forever
3: eno33559296: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq
state UP qlen 1000
link/ether 00:50:56:b2:7f:d4 brd ff:ff:ff:ff:ff:ff
inet 10.75.3.43/24 brd 10.75.3.255 scope global eno33559296
valid_lft forever preferred_lft forever
[root@gluster02 vins]# ip a
2: eno16780032: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq
state UP qlen 1000
link/ether 00:50:56:b2:4b:5c brd ff:ff:ff:ff:ff:ff
inet 10.75.2.42/24 brd 10.75.2.255 scope global eno16780032
valid_lft forever preferred_lft forever
3: eno33559296: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq
state UP qlen 1000
link/ether 00:50:56:b2:6c:8e brd ff:ff:ff:ff:ff:ff
inet 10.75.3.44/24 brd 10.75.3.255 scope global eno33559296
valid_lft forever preferred_lft forever
[root@gluster01 vins]# iptables -nxvL | grep 3.44
      10      600 ACCEPT     all  --  *      *       10.75.3.44           0.0.0.0/0
[root@gluster02 vins]# iptables -nxvL | grep 3.43
     101     7398 ACCEPT     all  --  *      *       10.75.3.43           0.0.0.0/0
[root@gluster01 vins]# cat /etc/hosts | grep 10.75.3
10.75.3.43 gluster01.int gluster01.int.uci.umk.pl
10.75.3.44 gluster02.int gluster02.int.uci.umk.pl
[root@gluster02 vins]# cat /etc/hosts | grep 10.75.3
10.75.3.43 gluster01.int gluster01.int.uci.umk.pl
10.75.3.44 gluster02.int gluster02.int.uci.umk.pl
Mount on the client:
gluster01.int:/testapi_vol on /mnt/glusterfs type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
# cat /etc/fstab | grep gluster
gluster01.int:/testapi_vol /mnt/glusterfs glusterfs defaults,_netdev 0 0
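As far as I understand it, the native FUSE client writes to every replica
brick itself, so I also checked from the client that both brick ports are
reachable (a quick sketch; 49153 is the brick port from the volume status
above):

# nc -zv gluster01.int 49153
# nc -zv gluster02.int 49153
# ss -tn | grep 49153

If the client could not open the second brick's port, writes would land
only on the first brick and the self-heal daemon would have to copy them
over later.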
My test (from client):
# dd if=/dev/zero of=test bs=1024 count=10240
I see the file on gluster01 right away, but I have to wait the
aforementioned ~7 minutes for it to appear on gluster02.
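For completeness, this is how I could inspect the replication metadata
directly on the brick that did receive the file (a sketch; as I
understand AFR, non-zero trusted.afr.* counters on a file mean that
operations are still pending against the other replica):

[root@gluster01 vins]# getfattr -d -m . -e hex /glusterfs/testapi/brick/test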
Could you please tell me what I can do to fix this, and where I should
look? If more information is needed, please let me know.
Best regards
Lukasz
--
Łukasz Zygmański
Information & Communication Technology Centre
Nicolaus Copernicus University
Coll. Maximum, pl. Rapackiego 1, 87-100 Torun, Poland
tel.: +48 56 611 27 36   fax: +48 56 622 18 50
email: Lukasz.Zygmanski@umk.pl