[Gluster-users] [ovirt-users] GlusterFS performance with only one drive per host?

Manoj Pillai mpillai at redhat.com
Sat Mar 24 17:56:14 UTC 2018


My take is that unless you have loads of data and are trying to optimize
for cost/TB, HDDs are probably not the right choice. This is particularly
true for random I/O workloads, for which HDDs are quite bad.

I'd recommend a recent gluster release, plus some tuning, because the default
settings are not optimized for performance. Some options to consider:
client.event-threads
server.event-threads
cluster.choose-local
performance.client-io-threads

You can toggle the last two and see what works for you. You'd probably need
to set the event-thread counts to 4 or more. Ideally you'd tune the thread
pools based on bottlenecks observed in collected stats; top (e.g. top -bHd 10 >
top_threads.out.txt) is great for this. Using 6 smaller drives/bricks instead
of 3 is also a good idea, to reduce the likelihood of RPC bottlenecks.
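
As a concrete sketch, assuming a placeholder volume named "myvol" (the
values are starting points to measure against, not guarantees):

    # raise client- and server-side event threads
    gluster volume set myvol client.event-threads 4
    gluster volume set myvol server.event-threads 4

    # toggle these two and measure; the better setting is workload-dependent
    gluster volume set myvol cluster.choose-local off
    gluster volume set myvol performance.client-io-threads on

    # capture per-thread CPU usage while the workload runs
    top -bHd 10 > top_threads.out.txt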

There has been an effort to improve gluster performance on fast SSDs, hence
the recommendation to try a recent release. You can also check in on some of
the issues being worked on:
https://github.com/gluster/glusterfs/issues/412
https://github.com/gluster/glusterfs/issues/410

-- Manoj

On Sat, Mar 24, 2018 at 4:14 AM, Jayme <jaymef at gmail.com> wrote:

> Do you feel that SSDs are worth the extra cost, or am I better off using
> regular HDDs?  I'm looking for the best performance I can get with GlusterFS.
>
> On Fri, Mar 23, 2018 at 12:03 AM, Manoj Pillai <mpillai at redhat.com> wrote:
>
>>
>>
>> On Thu, Mar 22, 2018 at 3:31 PM, Sahina Bose <sabose at redhat.com> wrote:
>>
>>>
>>>
>>> On Mon, Mar 19, 2018 at 5:57 PM, Jayme <jaymef at gmail.com> wrote:
>>>
>>>> I'm spec'ing a new oVirt build using three Dell R720s w/ 256GB.  I'm
>>>> considering storage options.  I don't have a requirement for high amounts
>>>> of storage; I have a little over 1TB to store, but want some overhead, so
>>>> I'm thinking 2TB of usable space would be sufficient.
>>>>
>>>> I've been doing some research on Micron 1100 2TB SSDs and they seem to
>>>> offer a lot of value for the money.  I'm considering using smaller, cheaper
>>>> SSDs for boot drives and using one 2TB Micron SSD in each host for a
>>>> GlusterFS replica 3 setup (I'm on the fence about using an arbiter; I like
>>>> the extra redundancy replica 3 will give me).
>>>>
>>>> My question is: would I see a performance hit using only one drive in
>>>> each host with GlusterFS, or should I try to add more physical disks,
>>>> such as six 1TB drives instead of three 2TB drives?
>>>>
>>>
>> It is possible. With SSDs, the RPC layer can become the bottleneck with
>> some workloads, especially if there are not enough connections out to the
>> server side. We had experimented with a multi-connection model for this
>> reason:  https://review.gluster.org/#/c/19133/.
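>>
>> If you want to check whether connections are the constraint, gluster can
>> report per-brick client connections (a quick sketch; "myvol" is a
>> placeholder volume name):
>>
>>     gluster volume status myvol clients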
>>
>> -- Manoj
>>
>>>
>>> [Adding gluster-users for inputs here]
>>>
>>>
>>>> Also, one other question: I've read that gluster can only be deployed in
>>>> groups of three, meaning you need 3, 6, or 9 hosts.  Is this true?  If I
>>>> had an operational replica 3 GlusterFS setup and wanted to add more
>>>> capacity, would I have to add 3 more hosts, or is it possible to add a
>>>> 4th host into the mix for extra processing power down the road?
>>>>
>>>
>>> In oVirt, we support replica 3 or replica 3 with arbiter (where one of
>>> the 3 bricks is a low-storage arbiter brick). To expand storage, you would
>>> need to add bricks in multiples of 3. However, if you only want to expand
>>> compute capacity in your HC environment, you can add a 4th node.
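>>>
>>> For example (a sketch; the volume name and brick paths are placeholders),
>>> growing a replica 3 volume means adding a full replica set of 3 bricks
>>> at once:
>>>
>>>     gluster volume add-brick myvol \
>>>         host4:/bricks/b1 host5:/bricks/b1 host6:/bricks/b1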
>>>
>>>
>>>> Thanks!