[Gluster-users] SSD tier experimenting
Mohammed Rafi K C
rkavunga at redhat.com
Tue May 3 04:42:43 UTC 2016
Comments are inline.
On 05/02/2016 09:42 PM, Dan Lambright wrote:
>
> ----- Original Message -----
>> From: "Sergei Hanus" <getallad at gmail.com>
>> To: "Mohammed Rafi K C" <rkavunga at redhat.com>
>> Cc: "Dan Lambright" <dlambrig at redhat.com>
>> Sent: Monday, May 2, 2016 9:40:22 AM
>> Subject: Re: [Gluster-users] SSD tier experimenting
>>
>> Mohammed, thank you once more for getting involved.
>> The workload is artificial - I use IOmeter to generate IOPS, with a 512 B,
>> 50% random, 75% read profile.
>> I created a 100G Gluster volume, created a VM (using libvirt), and attached
>> the volume to the VM as a separate disk.
>> The dataset for testing is 10 GB (the VM's RAM size is 4 GB).
>>
>> As I described, when I run the test without tiering, I get around 3k IOPS.
>> When I attach a tier and restart the test with the same parameters, I get
>> latencies on the order of seconds and 50-60 IOPS for a pretty long time
>> (5 minutes; I didn't run the test longer. Maybe I should?).
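For anyone reproducing this on the Gluster side, a roughly equivalent fio
job might look like the sketch below (fio stands in for IOmeter here; the
file path, queue depth, and runtime are assumptions):

  # 512 B blocks, 75% reads, 50% random, 10 GB dataset, 5 minute run
  $ fio --name=tiertest --filename=/mnt/gvol/test.dat --size=10g \
        --direct=1 --ioengine=libaio --iodepth=16 \
        --bs=512 --rw=randrw --rwmixread=75 --percentage_random=50 \
        --time_based --runtime=300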
So for a create-heavy workload on a tiered volume, there is a performance
drop. We are planning to add a volume set option that can be enabled when
the workload is small-file-performance oriented.
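Something along the lines of the sketch below; the option name here is
purely hypothetical and not in any release:

  # hypothetical option name, shown only to illustrate the idea
  $ gluster volume set testvol cluster.tier-small-file-workload on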
I would be interested to see the numbers on a plain tiered volume. What I
mean is: create a plain volume, attach a tier, and only then start I/O and
look at the performance numbers. For example, see the sketch below.
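A minimal sketch, assuming three hosts and placeholder brick paths (3.7
uses the attach-tier syntax):

  # create and start a plain replica-3 volume
  $ gluster volume create testvol replica 3 server{1..3}:/data/brick1
  $ gluster volume start testvol
  # attach the SSD bricks as the hot tier, then start the workload
  $ gluster volume attach-tier testvol replica 3 server{1..3}:/ssd/brick1
  $ gluster volume info testvol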
>>
>> Requested log file is attached.
This is a regression introduced in the latest 3.7 branch release. You can
track the status here [1]. The fix will be available in the next 3.7
release. Though the command reports the status as failed, it is only a
reporting problem; functionally everything works.
>>
>> As I said, I will be glad to do further work on Gluster in case you need
>> more experiments.
>>
> The small file issue Rafi mentioned can be tracked with this patch
>
> http://review.gluster.org/#/c/13601/
>
> This patch will make it downstream, but that will take longer.
>
> Your drop in performance is worse than anything I've seen, though. I'm not sure the patch alone explains the behavior you are seeing.
>
> If you create a new volume made up of only the same bricks (/ssd/brick{1..3}) that are in the hot tier and run the same workload, what IOPS do you see?
>
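For instance, something like the sketch below (hostname assumed; note that
the bricks would need to be cleared of the old volume's xattrs before they
can be reused):

  # plain distribute volume built from just the SSD bricks
  $ gluster volume create ssdonly server1:/ssd/brick1 server1:/ssd/brick2 server1:/ssd/brick3
  $ gluster volume start ssdonly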