[Gluster-users] 100% CPU WAIT
Tom van Leeuwen
tom.van.leeuwen at saasplaza.com
Wed Oct 8 07:53:38 UTC 2014
About the version I'm running:
$ glusterfs --version
glusterfs 3.4.1 built on Oct 28 2013 11:01:57
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2013 Red Hat, Inc. <http://www.redhat.com/>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser
General Public License, version 3 or any later version (LGPLv3
or later), or the GNU General Public License, version 2 (GPLv2),
in all cases as published by the Free Software Foundation.
$ rpm -qa | grep gluster
glusterfs-3.4.1-3.el6.x86_64
glusterfs-cli-3.4.1-3.el6.x86_64
glusterfs-libs-3.4.1-3.el6.x86_64
glusterfs-fuse-3.4.1-3.el6.x86_64
glusterfs-server-3.4.1-3.el6.x86_64
On 08-10-14 09:50, Tom van Leeuwen wrote:
> Hi Pranith, sure! Which logfile(s) would you be interested in, and
> should I remove sensitive information such as the username/password
> I see in glustershd.log, for example?
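
In the meantime, here is roughly how I am collecting them before sending,
with the credentials stripped out first (just a sketch: the glustershd.log
path is the default on this box, the brick log path is the one from the
iotop output below, and the sed patterns are a guess at how the
username/password lines actually look):

$ mkdir /tmp/gluster-logs
$ cp /var/log/glusterfs/glustershd.log \
     /var/log/glusterfs/bricks/glusterfs-brick1.log /tmp/gluster-logs/
$ sed -i -e 's/password=[^ ]*/password=REDACTED/g' \
         -e 's/username=[^ ]*/username=REDACTED/g' /tmp/gluster-logs/*.log
$ tar czf /tmp/gluster-logs.tar.gz -C /tmp gluster-logs
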
>
> On 08-10-14 07:46, Pranith Kumar Karampuri wrote:
>>
>> On 10/03/2014 03:30 PM, Tom van Leeuwen wrote:
>>> Hi guys,
>>>
>>> My glusterfs brick process (glusterfsd) is causing 100% CPU WAIT
>>> (iowait) according to `top`. This has been going on for hours and I
>>> have no idea what is causing it. How can I troubleshoot this?
>>>
>>> Iotop reports this:
>>> Total DISK READ: 268.60 K/s | Total DISK WRITE: 0.00 B/s
>>>  TID  PRIO  USER  DISK READ    DISK WRITE  SWAPIN   IO       COMMAND
>>> 7899  be/4  root  268.60 K/s   0.00 B/s    0.00 %   96.70 %  glusterfsd -s server01
>>>         --volfile-id myvol.server01.glusterfs-brick1
>>>         -p /var/lib/glusterd/vols/myvol/run/server01-glusterfs-brick1.pid
>>>         -S /var/run/a7562806405853d2b9382d6fc59051cc.socket
>>>         --brick-name /glusterfs/brick1
>>>         -l /var/log/glusterfs/bricks/glusterfs-brick1.log
>>>         --xlator-option *-posix.glusterd-uuid=07acd5b2-85e6-46f1-8477-038028e8ef7f
>>>         --brick-port 49152
>>>         --xlator-option myvol-server.listen-port=49152
>>> 1885  be/4  root  0.00 B/s     0.00 B/s    0.00 %    0.98 %  glusterfsd (identical command line to TID 7899 above)
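
For what it is worth, a rough sketch of what I am poking at to see what
that brick process is actually reading (the PID 7899 and the volume name
myvol are taken from the iotop output above; adjust as needed):

$ strace -f -p 7899 -e trace=open,read,lstat,getxattr   # which files/syscalls the busy thread sits in
$ ls -l /proc/7899/fd | head -n 20                      # open file descriptors of the brick process
$ gluster volume status myvol detail
$ gluster volume heal myvol info                        # in case self-heal is what is generating the reads
$ gluster volume profile myvol start
$ gluster volume profile myvol info                     # per-fop latency counts after letting it run a bit
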
>>
>> Could you provide us with the logs, please?
>>
>> Pranith
>>>
>>> Kind regards,
>>> Tom van Leeuwen
>>>