[Gluster-users] Very slow directory listing and high CPU usage on replicated volume
Jules Wang
lancelotds at 163.com
Fri Nov 2 01:24:24 UTC 2012
Hi Jonathan:
GlusterFS is not designed to handle a very large number of small files. Because it has no metadata server, every lookup has to go out to the bricks, so lookup-heavy operations are expensive in your situation.
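You can see this for yourself with the built-in io-stats profiler. A minimal sketch, assuming a volume named "myvol" (replace with your own) and Python available on one of the server nodes:

# Rough sketch only: uses Gluster's io-stats profiler to show how much
# of the work is LOOKUP calls. "myvol" is a placeholder volume name;
# run this on a server node with the gluster CLI in PATH.
import subprocess

VOLUME = "myvol"  # assumption: replace with your actual volume name

def profile(action):
    # Runs "gluster volume profile <volume> <action>" and returns its output.
    cmd = ["gluster", "volume", "profile", VOLUME, action]
    return subprocess.check_output(cmd).decode()

profile("start")  # begin collecting per-brick, per-FOP latency statistics
input("Reproduce the slow ls/find on a client, then press Enter... ")
print(profile("info"))  # look at the LOOKUP rows: call counts and latency per brick
profile("stop")   # stop profiling when done

If the LOOKUP call counts and latency dominate the output while READ and WRITE stay small, that points at the per-file lookups rather than the data path.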
The disk usage is abnormal. Do those disks hold only the Gluster bricks, or something else as well?
Best Regards.
Jules Wang
At 2012-11-02 08:03:21, "Jonathan Lefman" <jonathan.lefman at essess.com> wrote:
Hi all,
I am having problems with painfully slow directory listings on a freshly created replicated volume. The configuration is as follows: 2 nodes with 3 replicated drives each. The total volume capacity is 5.6T. We would like to expand the storage capacity much more, but first we need to figure this problem out.
Soon after loading about 100 MB of small files (about 300 KB each), the drive usage is at 1.1T; I am not sure whether that is to be expected. The main problem is that directory listing (ls or find) takes a very long time. CPU usage on the nodes is high for each of the three glusterfsd processes per machine; 54%, 43%, and 25% per core is a typical example. Memory usage for each process is very low. This issue has been incredibly difficult to diagnose. We have wiped previous Gluster installs, all directories, and mount points, and reformatted the disks. Each drive is formatted with ext4.
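One way to narrow this down would be to check whether the bare directory read is fast and only the per-entry stat is slow (ls with colour, ls -l, and find stat each file, and each stat becomes a network LOOKUP on the Gluster client). A rough sketch, with /mnt/gluster/somedir standing in for a directory on the mounted volume:

# Rough sketch: time a bare directory read vs. a read that also stats
# every entry on the Gluster mount. If the second number dwarfs the
# first, the per-file lookups that "ls -l"/"find" generate are the cost.
# "/mnt/gluster/somedir" is a placeholder; point it at a real directory.
import os
import time

path = "/mnt/gluster/somedir"

t0 = time.time()
names = os.listdir(path)          # readdir only
t1 = time.time()
for name in names:
    os.lstat(os.path.join(path, name))  # one stat (network LOOKUP) per entry
t2 = time.time()

print("readdir of %d entries: %.2f s" % (len(names), t1 - t0))
print("stat of every entry:   %.2f s" % (t2 - t1))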
Has anyone had a similar result? Any ideas on how to debug this one?
Thank you,
Jon