[Gluster-users] Performance degrade
pkoelle at gmail.com
Mon Jul 19 17:07:42 UTC 2010
On 19.07.2010 18:53, Roland Rabben wrote:
> I am not sure about the number of threads I should use. Your argument
> sounds logical and I should try that.
A cheap way to test the theory would be to lower io-threads to 2 or 3.
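As a minimal sketch, that change would look roughly like this in the server
volfile (the volume and subvolume names here are placeholders, not from your
actual setup):

  volume iothreads
    type performance/io-threads
    option thread-count 2      # try 2 or 3 instead of 8 per export
    subvolumes brick-posix     # placeholder: your underlying storage/posix volume
  end-volume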
> First of all I care about NOT losing files. That's why I replicate files.
Then I suggest you provide some redundancy at the block level
(SW-RAID?). Doing a full resync just because one disk failed seems risky.
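If you go that way, a rough sketch with Linux software RAID (mdadm), assuming
four disks /dev/sdb through /dev/sde that can be wiped:

  # Build a RAID-5 array (survives one disk failure) from the four disks.
  mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[b-e]

  # Format the array and mount it as a single GlusterFS export.
  mkfs.ext3 /dev/md0
  mount /dev/md0 /export/brick0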
> I am not familiar with LVM and how to use it. Is this a normal setup
> for Gluster users? What are the pros and cons with LVM in a GlusterFS
> setup?
GlusterFS shouldn't care, as it operates at the filesystem level and LVM
logical volumes (think partitions) are just block devices. LVM lets you
group your disks into volume groups and carve logical volumes out of them
(no reboot needed). We haven't noticed any performance overhead.
> Is it possible to create logical volumes from disks already
> containing data, or would they need to be formatted? They are
> formatted EXT3 today.
No, LVM has its own on-disk format: disks must first be initialized as
physical volumes, which destroys existing data, so you would have to move
the data off and back.
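The basic flow, sketched with example device and volume names (pvcreate will
destroy whatever is on the disks):

  # Initialize raw disks as LVM physical volumes (wipes existing data).
  pvcreate /dev/sdb /dev/sdc /dev/sdd

  # Group them into a single volume group.
  vgcreate gluster_vg /dev/sdb /dev/sdc /dev/sdd

  # Carve out a logical volume, format it, and mount it as an export.
  lvcreate --name brick0 --size 500G gluster_vg
  mkfs.ext3 /dev/gluster_vg/brick0
  mount /dev/gluster_vg/brick0 /export/brick0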
> Roland Rabben
> 2010/7/19 pkoelle <pkoelle at gmail.com>:
>> On 19.07.2010 17:10, Roland Rabben wrote:
>>> I did try that on one of the clients. I removed all performance
>>> translators except io-threads. No improvement.
>>> The server still uses a huge amount of CPU.
>> 36*8 = 288 threads alone for IO. I don't know specifics about GlusterFS but
>> common knowledge suggests high thread counts are bad. You end up spending all
>> your CPU waiting on locks and in context switches.
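>> An easy way to check, assuming standard Linux tools (vmstat from procps,
>> pidstat from the sysstat package):
>>
>>   # Watch the 'cs' (context switches/s) and 'sy' (system CPU) columns under load.
>>   vmstat 1
>>
>>   # Per-process context-switch rates for the gluster server daemon.
>>   pidstat -w -C glusterfsd 1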
>> Why do you export each disk separately? You don't seem to care about disk
>> failure, so you could put all disks in one LVM VG and export LVs from that.
>>> 2010/7/19 Andre Felipe Machado <andremachado at techforce.com.br>:
>>>> Did you try to minimize or even NOT use any cache?
>>>> With so many nodes, the cache coherency between them may have become an
>>>> issue.
>>>> Andre Felipe Machado