[Gluster-devel] High CPU consumption on Windows guest OS when libgfapi is used

qingwei wei tchengwee at gmail.com
Tue May 24 06:08:01 UTC 2016


Hi Vijay,

Have you had time to look into this issue yet?

Cw

On Tue, May 3, 2016 at 5:55 PM, qingwei wei <tchengwee at gmail.com> wrote:

> Hi Vijay,
>
> I finally managed to do this test on a sharded volume.
>
> gluster volume info
>
> Volume Name: abctest
> Type: Distributed-Replicate
> Volume ID: 0db494e2-51a3-4521-a1ba-5d3479cecba2
> Status: Started
> Number of Bricks: 3 x 3 = 9
> Transport-type: tcp
> Bricks:
> Brick1: abc11:/data/hdd1/abctest
> Brick2: abc12:/data/hdd1/abctest
> Brick3: abc14:/data/hdd1/abctest
> Brick4: abc16:/data/hdd1/abctest
> Brick5: abc17:/data/hdd1/abctest
> Brick6: abc20:/data/hdd1/abctest
> Brick7: abc22:/data/hdd1/abctest
> Brick8: abc23:/data/hdd1/abctest
> Brick9: abc24:/data/hdd1/abctest
> Options Reconfigured:
> features.shard-block-size: 16MB
> features.shard: on
> server.allow-insecure: on
> storage.owner-uid: 165
> storage.owner-gid: 165
> nfs.disable: true
> performance.quick-read: off
> performance.io-cache: off
> performance.read-ahead: off
> performance.stat-prefetch: off
> cluster.lookup-optimize: on
> cluster.quorum-type: auto
> cluster.server-quorum-type: server
> transport.address-family: inet
> performance.readdir-ahead: off
>
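> For reference, a qemu drive pointing at this volume over libgfapi would
> look roughly like the line below (server, image name and format are
> placeholders here, not the exact command line used for this guest):
>
>   -drive file=gluster://abc11/abctest/win-guest.img,format=raw,if=virtio,cache=none
>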
> The result is still the same.
>
> 4k random write
>
> IOPS 5355.75
> Avg. response time (ms) 2.79
> CPU utilization total (%) 96.73
> CPU Privileged time (%) 92.49
>
> 4k random read
>
> IOPS 16718.93
> Avg. response time (ms) 0.9
> CPU utilization total (%) 79.2
> CPU Privileged time (%) 75.43
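>
> (As a rough sketch of the 4k random I/O pattern above -- the exact
> benchmark tool and its settings on the Windows guest are not shown here,
> and the drive letter, queue depth and file size below are placeholders --
> a fio job along these lines would generate a similar workload; use
> rw=randread for the read case:)
>
>   [global]
>   ioengine=windowsaio
>   direct=1
>   bs=4k
>   iodepth=32
>   runtime=60
>   time_based
>
>   [4k-rand]
>   rw=randwrite
>   filename=D\:\fio-test.dat
>   size=4g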
>
> Below is a snapshot of top -H taken while running the 4k random write test:
>
>   PID USER      PR  NI    VIRT    RES    SHR S %CPU %MEM     TIME+ COMMAND
> 23850 qemu      20   0 10.294g 8.203g  11788 R 92.7  6.5   5:42.01 qemu-kvm
> 24116 qemu      20   0 10.294g 8.203g  11788 S 34.2  6.5   1:27.91 qemu-kvm
> 26948 qemu      20   0 10.294g 8.203g  11788 R 33.6  6.5   1:26.85 qemu-kvm
> 24115 qemu      20   0 10.294g 8.203g  11788 S 32.9  6.5   1:27.72 qemu-kvm
> 26937 qemu      20   0 10.294g 8.203g  11788 S 32.9  6.5   1:27.87 qemu-kvm
> 27050 qemu      20   0 10.294g 8.203g  11788 R 32.9  6.5   1:17.14 qemu-kvm
> 27033 qemu      20   0 10.294g 8.203g  11788 S 31.6  6.5   1:19.40 qemu-kvm
> 24119 qemu      20   0 10.294g 8.203g  11788 S 26.6  6.5   1:32.16 qemu-kvm
> 24120 qemu      20   0 10.294g 8.203g  11788 S 25.9  6.5   1:32.02 qemu-kvm
> 23880 qemu      20   0 10.294g 8.203g  11788 S  8.3  6.5   2:31.11 qemu-kvm
> 23881 qemu      20   0 10.294g 8.203g  11788 S  8.0  6.5   2:58.75 qemu-kvm
> 23878 qemu      20   0 10.294g 8.203g  11788 S  7.6  6.5   2:04.15 qemu-kvm
> 23879 qemu      20   0 10.294g 8.203g  11788 S  7.6  6.5   2:36.50 qemu-kvm
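>
> (One way to see where the busy qemu-kvm thread spends its time on the
> host would be to profile the hot thread id from the snapshot above with
> perf, e.g.:
>
>   perf record -t 23850 -g -- sleep 30
>   perf report --stdio
>
> just a sketch of further profiling, not data collected in this run.)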
>
> Thanks.
>
> Cw
>
> On Thu, Apr 21, 2016 at 10:12 PM, Vijay Bellur <vbellur at redhat.com> wrote:
>
>> On Wed, Apr 20, 2016 at 4:17 AM, qingwei wei <tchengwee at gmail.com> wrote:
>> > Gluster volume configuration; the bold entries are my initial settings:
>> >
>> > Volume Name: g37test
>> > Type: Stripe
>> > Volume ID: 3f9dae3d-08f9-4321-aeac-67f44c7eb1ac
>> > Status: Created
>> > Number of Bricks: 1 x 10 = 10
>> > Transport-type: tcp
>> > Bricks:
>> > Brick1: 192.168.123.4:/mnt/sdb_mssd/data
>> > Brick2: 192.168.123.4:/mnt/sdc_mssd/data
>> > Brick3: 192.168.123.4:/mnt/sdd_mssd/data
>> > Brick4: 192.168.123.4:/mnt/sde_mssd/data
>> > Brick5: 192.168.123.4:/mnt/sdf_mssd/data
>> > Brick6: 192.168.123.4:/mnt/sdg_mssd/data
>> > Brick7: 192.168.123.4:/mnt/sdh_mssd/data
>> > Brick8: 192.168.123.4:/mnt/sdj_mssd/data
>> > Brick9: 192.168.123.4:/mnt/sdm_mssd/data
>> > Brick10: 192.168.123.4:/mnt/sdn_mssd/data
>> > Options Reconfigured:
>> > server.allow-insecure: on
>> > storage.owner-uid: 165
>> > storage.owner-gid: 165
>> > performance.quick-read: off
>> > performance.io-cache: off
>> > performance.read-ahead: off
>> > performance.stat-prefetch: off
>> > cluster.eager-lock: enable
>> > network.remote-dio: enable
>> > cluster.quorum-type: auto
>> > cluster.server-quorum-type: server
>> > nfs.disable: true
>> >
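>> > (As a side note, most of these options correspond to gluster's built-in
>> > "virt" group profile for VM workloads, which can be applied in one shot
>> > with the command below; the exact contents of the group depend on the
>> > gluster version, so this is only an approximation of the list above:
>> >
>> >   gluster volume set <VOLNAME> group virt )
>> >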
>>
>> I notice that you are using a stripe volume. Would it be possible to
>> test with a sharded volume? We will be focusing only on sharded
>> volumes for VM disks going forward.
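>>
>> (Sharding can be enabled per volume with, for example:
>>
>>   gluster volume set <VOLNAME> features.shard on
>>   gluster volume set <VOLNAME> features.shard-block-size 64MB
>>
>> the block size here is just an example value; only files created after
>> sharding is enabled are sharded.)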
>>
>> Thanks,
>> Vijay
>>
>
>

