[Gluster-users] Performance degrade

Roland Rabben roland at jotta.no
Mon Jul 19 16:53:14 UTC 2010


I am not sure about the number of threads I should use. Your argument
sounds logical, and I should try that.

First of all, I care about NOT losing files. That's why I replicate files.
I am not familiar with LVM or how to use it. Is this a normal setup
for Gluster users? What are the pros and cons of LVM in a GlusterFS
setup?

Is it possible to create logical volumes from disks already
containing data, or would they need to be reformatted? They are
formatted as EXT3 today.

Regards
Roland Rabben

2010/7/19 pkoelle <pkoelle at gmail.com>:
> On 19.07.2010 17:10, Roland Rabben wrote:
>>
>> I did try that on one of the clients. I removed all performance
>> translators except io-threads. No improvement.
>> The server still uses a huge amount of CPU.
>
> 36*8 = 288 threads for IO alone. I don't know the specifics of GlusterFS,
> but common knowledge suggests such high thread counts are bad: you end up
> spending all your CPU waiting on locks and in context switches.
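>
> If you do keep io-threads, you could also turn down the per-brick thread
> count in each server volfile. A rough sketch (the volume names and the
> export directory are placeholders; check the valid thread-count range for
> your GlusterFS version):
>
>   volume disk1-posix
>     type storage/posix
>     option directory /export/disk1
>   end-volume
>
>   volume disk1-iot
>     type performance/io-threads
>     option thread-count 2      # down from 8 per brick; 36 bricks x 2 = 72 threads total
>     subvolumes disk1-posix
>   end-volume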
>
> Why do you export each disk separately? You don't seem to care about
> single-disk failure (replication covers that), so you could put all the
> disks into one LVM VG and export LVs from that.
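>
> Roughly like this (device and volume names are examples, and note that
> pvcreate wipes a disk, so any existing ext3 data would have to be copied
> off first):
>
>   pvcreate /dev/sdb /dev/sdc /dev/sdd            # mark disks as LVM physical volumes (destructive)
>   vgcreate glustervg /dev/sdb /dev/sdc /dev/sdd  # pool them into one volume group
>   lvcreate -n brick1 -L 2T glustervg             # carve out one logical volume
>   mkfs.ext3 /dev/glustervg/brick1                # fresh filesystem on the new LV
>   mount /dev/glustervg/brick1 /export/brick1     # mount it and export it as one brick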
>
> cheers
>  Paul
>
>>
>> Roland
>>
>> 2010/7/19 Andre Felipe Machado <andremachado at techforce.com.br>:
>>>
>>> Hello,
>>> Did you try minimizing or even NOT using any cache?
>>> With so many nodes, cache coherency between them may have become an
>>> issue...
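>>>
>>> For example, a client volfile stripped of every performance translator
>>> (write-behind, read-ahead, io-cache and friends) could be as small as
>>> this sketch, where host and volume names are placeholders:
>>>
>>>   volume remote1
>>>     type protocol/client
>>>     option transport-type tcp
>>>     option remote-host server1
>>>     option remote-subvolume brick1
>>>   end-volume
>>>
>>>   volume remote2
>>>     type protocol/client
>>>     option transport-type tcp
>>>     option remote-host server2
>>>     option remote-subvolume brick1
>>>   end-volume
>>>
>>>   volume replicate0
>>>     type cluster/replicate     # writes go to both subvolumes
>>>     subvolumes remote1 remote2
>>>   end-volume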
>>> Regards.
>>> Andre Felipe Machado
>>>



-- 
Roland Rabben
Founder & CEO Jotta AS
Cell: +47 90 85 85 39
Phone: +47 21 04 29 00
Email: roland at jotta.no


