[Bugs] [Bug 1467614] Gluster read/write performance improvements on NVMe backend
bugzilla at redhat.com
Tue Jan 2 05:28:17 UTC 2018
https://bugzilla.redhat.com/show_bug.cgi?id=1467614
--- Comment #57 from Mohit Agrawal <moagrawa at redhat.com> ---
Hi,
After using the splice() call (for GitHub issue
https://github.com/gluster/glusterfs/issues/372) on the server side (write path
only), I saw some performance improvement in write throughput. This is only
initial testing, but I expect we can get a further improvement in IOPS if we
use splice on both sides (client and server) and for both fops (read and write).
# Without splice patch
for i in $(seq 1 5)
do
    rm -rf /mnt/rpmbuild/glusterfs-4.0dev1/*
    echo 3 > /proc/sys/vm/drop_caches
    time cp -rf rpmbuild_build/glusterfs-4.0dev1/* /mnt/rpmbuild/glusterfs-4.0dev1/
done
real 22m51.189s
user 0m0.086s
sys 0m1.511s
real 22m57.531s
user 0m0.090s
sys 0m2.025s
real 22m57.845s
user 0m0.115s
sys 0m1.834s
real 22m51.257s
user 0m0.113s
sys 0m1.966s
real 22m57.857s
user 0m0.113s
sys 0m1.966s
# After applying splice patch
for i in $(seq 1 5)
do
    rm -rf /mnt/rpmbuild/glusterfs-4.0dev1/*
    echo 3 > /proc/sys/vm/drop_caches
    time cp -rf rpmbuild_build/glusterfs-4.0dev1/* /mnt/rpmbuild/glusterfs-4.0dev1/
done
real 17m51.073s
user 0m0.104s
sys 0m1.862s
real 17m50.057s
user 0m0.097s
sys 0m2.005s
real 17m50.022s
user 0m0.096s
sys 0m1.928s
real 17m49.073s
user 0m0.101s
sys 0m1.828s
real 17m47.594s
user 0m0.077s
sys 0m1.753s
Regards
Mohit Agrawal