[Bugs] [Bug 1753843] [Disperse volume]: Regression in IO performance seen in sequential read for large file

bugzilla at redhat.com bugzilla at redhat.com
Fri Sep 20 04:14:58 UTC 2019


--- Comment #3 from Raghavendra G <rgowdapp at redhat.com> ---
I've checked the information provided by one of the runs
(sequential-read-and-reread-profile-fuse-large-file-test-run0.txt) and I've
found the reason (though not the cause) of the regression:

On 3.12, iozone reports a total of 171.3 GiB read during the first sequential
pass, while gluster profile info reports 172.8 GiB read from the bricks. These
figures match quite well (less than 1% overhead).

However, on 6.0, iozone reports 177.9 GiB read, while profile info reports
266.4 GiB read. This means that almost 50% more data is read from the bricks
than the application actually requested.
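The overhead figures above can be verified with a quick back-of-the-envelope
calculation (a sketch using the numbers quoted in this comment; the helper
name is mine, not from any gluster tool):

```python
def read_amplification(app_gib: float, brick_gib: float) -> float:
    """Fraction of extra data read from the bricks relative to what
    the application (iozone) actually requested."""
    return brick_gib / app_gib - 1.0

# glusterfs 3.12: iozone reads 171.3 GiB, profile info shows 172.8 GiB
amp_312 = read_amplification(171.3, 172.8)

# glusterfs 6.0: iozone reads 177.9 GiB, profile info shows 266.4 GiB
amp_60 = read_amplification(177.9, 266.4)

print(f"3.12 overhead: {amp_312:.1%}")  # ~0.9%
print(f"6.0  overhead: {amp_60:.1%}")   # ~49.7%
```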

I will try to determine the cause of this, but right now I think there are two
possible candidates:

1. ec's read-policy is set to round-robin

   Apparently this is not the case, as RHGS 3.5.0 has "gfid-hash" as the
   default value and the option doesn't appear as modified in the volume
   options.

2. Some issue in the read-ahead xlator, or in other xlators that can issue
   additional reads (maybe io-cache?)


You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.

More information about the Bugs mailing list