[Bugs] [Bug 1255110] New: client is sending io to arbiter with replica 2
bugzilla at redhat.com
Wed Aug 19 16:34:34 UTC 2015
https://bugzilla.redhat.com/show_bug.cgi?id=1255110
Bug ID: 1255110
Summary: client is sending io to arbiter with replica 2
Product: GlusterFS
Version: 3.7.3
Component: replicate
Severity: urgent
Assignee: bugs at gluster.org
Reporter: sdainard at spd1.com
CC: bugs at gluster.org, gluster-bugs at redhat.com
Description of problem:
Using a replica 2 + arbiter 1 configuration for an oVirt storage domain. When
the arbiter node is up, client IO drops by ~30%. Monitoring bandwidth on the
arbiter node shows significant rx traffic, in this case 50-60 MB/s, but the
brick's local path on the arbiter node shows no significant disk usage
(< 1 MB).
Version-Release number of selected component (if applicable):
CentOS 6.7 / 7.1
Gluster 3.7.3
How reproducible:
Always. If the arbiter node is killed, client IO is higher. Initially
discovered using oVirt, but also easily reproduced by writing to a client fuse
mount point.
Steps to Reproduce:
1. Create a gluster volume with replica 2 arbiter 1
2. Write data to a client fuse mount point
3. Watch real-time network bandwidth on the arbiter node (see the sketch below)
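A minimal repro sketch, assuming the replica 2 arbiter 1 create syntax from
step 1; the hostnames, volume name, and brick paths (node1..node3, testvol,
/bricks/b1) are hypothetical, and sar is just one of several tools that can
show the per-second rx rate:

# gluster volume create testvol replica 2 arbiter 1 \
    node1:/bricks/b1 node2:/bricks/b1 node3:/bricks/b1
# gluster volume start testvol
# mount -t glusterfs node1:/testvol /mnt/testvol
# dd if=/dev/zero of=/mnt/testvol/bigfile bs=1M count=4096 conv=fsync
# sar -n DEV 1          (run on node3, the arbiter, while dd is writing)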
Actual results:
Client is sending full data writes to the arbiter node, reducing performance
below what is expected.
Expected results:
No heavy IO should be going to the arbiter node, as it has no data-bearing
brick and no reason to receive file contents. This considerably slows client
IO, since the client is writing to three nodes instead of two. I would assume
the performance penalty is the same as replica 3 versus replica 2.
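One rough way to quantify the penalty, assuming the hypothetical test volume
from the sketch above; the arbiter brick process is killed only for the
comparison and can be restarted with 'gluster volume start testvol force':

# dd if=/dev/zero of=/mnt/testvol/t1 bs=1M count=2048 conv=fsync   (arbiter up)
# gluster volume status testvol        (note the arbiter brick PID on node3)
# kill <arbiter brick PID>             (on node3)
# dd if=/dev/zero of=/mnt/testvol/t2 bs=1M count=2048 conv=fsync   (arbiter down)
# gluster volume start testvol force   (restart the killed brick)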
Additional info:
During a disk migration from an NFS storage domain to a gluster storage domain,
the arbiter node interface shows 37GB of data received:
eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.231.62  netmask 255.255.255.0  broadcast 10.0.231.255
        inet6 fe80::5054:ff:fe61:a934  prefixlen 64  scopeid 0x20<link>
        ether 52:54:00:61:a9:34  txqueuelen 1000  (Ethernet)
        RX packets 5874053  bytes 39820122925 (37.0 GiB)
        RX errors 0  dropped 650  overruns 0  frame 0
        TX packets 4793230  bytes 4387154708 (4.0 GiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
But the arbiter node has very little storage space available to it (the
'brick' mount point is on /):
# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda3       8.6G  1.2G  7.4G  14% /
devtmpfs        912M     0  912M   0% /dev
tmpfs           921M     0  921M   0% /dev/shm
tmpfs           921M  8.4M  912M   1% /run
tmpfs           921M     0  921M   0% /sys/fs/cgroup
/dev/vda1       497M  157M  341M  32% /boot
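To confirm that the received bytes are not landing as file data, the arbiter
brick path can be inspected directly (brick path hypothetical, from the sketch
above); on an arbiter brick, files are expected to be zero-length placeholders
that carry only names and metadata:

# du -sh /bricks/b1
# ls -l /bricks/b1     (data files should show size 0 on the arbiter)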