[Bugs] [Bug 1439731] Sharding: Fix a performance bug

bugzilla at redhat.com bugzilla at redhat.com
Thu May 11 12:28:27 UTC 2017


https://bugzilla.redhat.com/show_bug.cgi?id=1439731

RamaKasturi <knarra at redhat.com> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
             Status|ON_QA                       |VERIFIED



--- Comment #9 from RamaKasturi <knarra at redhat.com> ---
Verified and works fine with build glusterfs-3.8.4-18.1.el7rhgs.x86_64.

Tested this with 3 VMs, one hosted on each hypervisor. Below is the process I
followed for testing this:

1) Installed HC with three hypervisors.
2) Created 1 VM on each of them.
3) Installed fio on the VMs.
4) Initially ran an 80:20 workload on the VMs and profiled the fuse mount, but
did not record those values since writes are also involved in that run.
5) Ran a 100 percent random read workload and profiled the fuse mount on all
three hypervisors (a rough scripted sketch of this step follows the list).
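For reference, here is a minimal sketch of what one iteration of step 5 could
look like when scripted. The volume name, mount point and fio parameters are
placeholders I am assuming, not the values used in this verification, and the
fuse-side counts were collected separately from the client profile.

import subprocess

VOLUME = "data"      # placeholder: the actual volume name is not given in the bug
MOUNT = "/mnt/data"  # placeholder: the actual fuse mount point is not given in the bug

# Start brick-side profiling (ignore the error if it is already running).
subprocess.run(["gluster", "volume", "profile", VOLUME, "start"], check=False)

# 100 percent random read workload, roughly what step 5 describes; the fio
# parameters here are illustrative, not the exact job file used in the test.
subprocess.run(
    ["fio", "--name=randread", "--rw=randread", "--bs=4k", "--size=1g",
     "--numjobs=4", "--runtime=120", "--time_based", "--ioengine=libaio",
     "--direct=1", "--directory=" + MOUNT],
    check=True)

# Dump the brick-side counters; the LOOKUP rows give the per-brick call counts.
info = subprocess.run(["gluster", "volume", "profile", VOLUME, "info"],
                      check=True, capture_output=True, text=True)
print(info.stdout)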

1st iteration:
========================
Total lookups sent over the fuse mount: 785
Total lookups sent over the bricks: 1131

2nd iteration:
========================
Total lookups sent over the fuse mount: 810
Total lookups sent over the bricks: 1111

3rd iteration:
========================
Total lookups sent over the fuse mount: 798
Total lookups sent over the bricks: 1105
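
As a quick sanity check on the figures above (reading brick/fuse as an
amplification ratio is my interpretation, not something stated in the bug),
the brick-side lookups stay within roughly 1.4x of the fuse-side lookups in
every iteration:

fuse = [785, 810, 798]      # lookups seen at the fuse mount per iteration
brick = [1131, 1111, 1105]  # lookups seen at the bricks per iteration
for i, (f, b) in enumerate(zip(fuse, brick), start=1):
    print("iteration %d: brick/fuse = %.2f" % (i, b / f))
# prints roughly 1.44, 1.37 and 1.38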

Profile output from the fuse mount and the bricks is available at the link below:

http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/HC/output_lookups/
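
For anyone re-checking those files, a rough way to pull the LOOKUP call counts
out of a saved "gluster volume profile ... info" dump; the column layout is
assumed from the usual profile output and may differ between versions.

import re
import sys

# Sum the "No. of calls" column for every LOOKUP row in a saved profile dump.
# Assumes each FOP row ends with "<calls> <FOP-NAME>", which matches the usual
# profile info layout but is an assumption, not taken from the bug.
total = 0
with open(sys.argv[1]) as dump:
    for line in dump:
        m = re.search(r"(\d+)\s+LOOKUP\s*$", line)
        if m:
            total += int(m.group(1))
print("total LOOKUP calls:", total)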

-- 
You are receiving this mail because:
You are on the CC list for the bug.

