[Bugs] [Bug 1375125] arbiter volume write performance is bad.

bugzilla at redhat.com bugzilla at redhat.com
Wed Oct 26 02:52:52 UTC 2016


https://bugzilla.redhat.com/show_bug.cgi?id=1375125

humaorong <maorong.hu at horebdata.cn> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |maorong.hu at horebdata.cn
              Flags|                            |needinfo?



--- Comment #7 from humaorong <maorong.hu at horebdata.cn> ---
 Hi, I installed the glusterfs nightly build rpm (2016-10-25) from
http://artifacts.ci.centos.org/gluster/nightly/release-3.8/7/x86_64/?C=M;O=D
then created a replica 3 arbiter 1 volume and enabled features.shard
(setting it to either "enable" or "on"). The volume info is:

[root at horeba ~]# gluster --version
glusterfs 3.8.5 built on Oct 25 2016 02:09:23

[root at horeba ~]# gluster volume info data_volume3

Volume Name: data_volume3
Type: Distributed-Replicate
Volume ID: cd5f4322-11e3-4f18-a39d-f0349b8d2a0c
Status: Started
Snapshot Count: 0
Number of Bricks: 4 x (2 + 1) = 12
Transport-type: tcp
Bricks:
Brick1: 192.168.10.71:/data_sdaa/brick
Brick2: 192.168.10.72:/data_sdaa/brick
Brick3: 192.168.10.73:/data_sdaa/brick (arbiter)
Brick4: 192.168.10.71:/data_sdc/brick
Brick5: 192.168.10.73:/data_sdc/brick
Brick6: 192.168.10.72:/data_sdc/brick (arbiter)
Brick7: 192.168.10.72:/data_sde/brick
Brick8: 192.168.10.73:/data_sde/brick
Brick9: 192.168.10.71:/data_sde/brick (arbiter)
Brick10: 192.168.10.71:/data_sde/brick1
Brick11: 192.168.10.72:/data_sdc/brick1
Brick12: 192.168.10.73:/data_sdaa/brick1 (arbiter)
Options Reconfigured:
server.allow-insecure: on
features.shard: enable
features.shard-block-size: 512MB
storage.owner-gid: 36
storage.owner-uid: 36
nfs.disable: on
cluster.data-self-heal-algorithm: full
auth.allow: *
network.ping-timeout: 10
performance.low-prio-threads: 32
performance.io-thread-count: 32
cluster.server-quorum-type: server
cluster.quorum-type: auto
network.remote-dio: enable
cluster.eager-lock: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
transport.address-family: inet
performance.readdir-ahead: on
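
For reference, the volume was created roughly like this (a sketch; the
brick order is taken from the volume info above, so the exact command may
have differed slightly):

gluster volume create data_volume3 replica 3 arbiter 1 \
    192.168.10.71:/data_sdaa/brick 192.168.10.72:/data_sdaa/brick 192.168.10.73:/data_sdaa/brick \
    192.168.10.71:/data_sdc/brick 192.168.10.73:/data_sdc/brick 192.168.10.72:/data_sdc/brick \
    192.168.10.72:/data_sde/brick 192.168.10.73:/data_sde/brick 192.168.10.71:/data_sde/brick \
    192.168.10.71:/data_sde/brick1 192.168.10.72:/data_sdc/brick1 192.168.10.73:/data_sdaa/brick1
# every third brick in each group of three becomes the arbiter
gluster volume set data_volume3 features.shard on
gluster volume set data_volume3 features.shard-block-size 512MB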

I mounted the volume with the glusterfs client on one host and ran a dd
test; the results are:
[root at horebb test6]# for i in `seq 3`; do dd if=/dev/zero of=./file   bs=1G
count=1 oflag=direct ; done
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB) copied, 55.9329 s, 19.2 MB/s
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB) copied, 54.8481 s, 19.6 MB/s
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB) copied, 57.9079 s, 18.5 MB/s
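
With the 512MB shard block size, each 1GB file is stored as a base file
plus shards under the hidden /.shard directory on the data bricks, named
<gfid-of-base-file>.<index>, so each of these writes crosses a shard
boundary. That the shards were created can be confirmed on one of the data
bricks, e.g.:

ls -lh /data_sdaa/brick/.shard/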

Then I disabled the features.shard option and tested again:
[root at horeba ~]# gluster volume reset data_volume3 features.shard
volume reset: success: reset volume successful
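
To confirm the option is back at its default (off) after the reset,
gluster volume get can be used:

gluster volume get data_volume3 features.shard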

[root at horebb test6]# for i in `seq 3`; do dd if=/dev/zero of=./filetest   bs=1G
count=1 oflag=direct ; done
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB) copied, 1.25607 s, 855 MB/s
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB) copied, 1.18359 s, 907 MB/s
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB) copied, 1.29374 s, 830 MB/s

I also cloned the master source code on 2016-10-25 (git clone
https://github.com/gluster/glusterfs), built rpms from it, and installed
them; the test result was the same as with the nightly build.
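
The rpms were built from the source checkout roughly as follows (a sketch;
the glusterrpms target lives under extras/LinuxRPM in the source tree):

cd glusterfs
./autogen.sh
./configure
make -C extras/LinuxRPM glusterrpms   # rpms land under extras/LinuxRPM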

So the problem of bad write performance with sharding enabled still
exists. Please look into how to resolve it; as we know, the shard feature
is important for glusterfs usage.
