[Gluster-users] Sudden performance drop in gluster

Vijay Bellur vbellur at redhat.com
Sun Apr 16 14:38:37 UTC 2017


On Fri, Apr 14, 2017 at 3:35 PM, Pat Haley <phaley at mit.edu> wrote:

>
> This seems to have cleared itself.  For future reference though, what
> kinds of things should I look at to diagnose an issue like this?
>


Turning on gluster volume profile [1] and sampling the output of profile
info at periodic intervals would help. In addition, you could strace the
glusterfsd process and/or use `perf record` to determine what the process
is doing.
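
For example, a rough sketch (assuming the volume name data-volume from your
output below, and using <PID> for the pid of the busy glusterfsd, e.g. 5021
or 5026 from your volume status; the intervals are arbitrary):

  # enable profiling, then sample the counters every few minutes
  gluster volume profile data-volume start
  gluster volume profile data-volume info > /tmp/profile.$(date +%s)
  gluster volume profile data-volume stop      # when done

  # count the syscalls glusterfsd is making (Ctrl-C prints the summary)
  strace -c -f -p <PID>

  # sample where glusterfsd spends CPU for ~30 seconds
  perf record -g -p <PID> -- sleep 30
  perf report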

HTH,
Vijay

[1]
https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Monitoring%20Workload/



> Thanks
>
>
>
> On 04/14/2017 01:16 PM, Pat Haley wrote:
>
>>
>> Hi,
>>
>> Today we suddenly experienced a performance drop in gluster: e.g., doing
>> an "ls" of a directory with about 20 files takes about 5 minutes.  This is
>> way beyond (and seems separate from) some previous concerns we had.
>>
>> Our gluster filesystem is two bricks hosted on a single server. Logging
>> onto that server and running "top" shows a load average of ~30.  In general,
>> no process is showing significant CPU usage except an occasional spike to
>> ~3300% from glusterfsd.  The rest of our system is not making any exceptional
>> data demands on the file system (i.e. we aren't suddenly running more jobs
>> than we were yesterday).
>>
>> Any thoughts on how we can proceed with debugging this will be greatly
>> appreciated.
>>
>> Some additional information:
>>
>> glusterfs 3.7.11 built on Apr 27 2016 14:09:22
>> CentOS release 6.8 (Final)
>>
>>
>> [root@mseas-data2 ~]# gluster volume status data-volume
>> Status of volume: data-volume
>> Gluster process                             TCP Port  RDMA Port  Online  Pid
>> ------------------------------------------------------------------------------
>> Brick mseas-data2:/mnt/brick1               49154     0          Y       5021
>> Brick mseas-data2:/mnt/brick2               49155     0          Y       5026
>>
>> Task Status of Volume data-volume
>> ------------------------------------------------------------------------------
>>
>> Task                 : Rebalance
>> ID                   : 892d9e3a-b38c-4971-b96a-8e4a496685ba
>> Status               : completed
>>
>>
>> [root@mseas-data2 ~]# gluster volume info data-volume
>>
>> Volume Name: data-volume
>> Type: Distribute
>> Volume ID: c162161e-2a2d-4dac-b015-f31fd89ceb18
>> Status: Started
>> Number of Bricks: 2
>> Transport-type: tcp
>> Bricks:
>> Brick1: mseas-data2:/mnt/brick1
>> Brick2: mseas-data2:/mnt/brick2
>> Options Reconfigured:
>> diagnostics.brick-sys-log-level: WARNING
>> performance.readdir-ahead: on
>> nfs.disable: on
>> nfs.export-volumes: off
>>
>>
>>
> --
>
> -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
> Pat Haley                          Email:  phaley at mit.edu
> Center for Ocean Engineering       Phone:  (617) 253-6824
> Dept. of Mechanical Engineering    Fax:    (617) 253-8125
> MIT, Room 5-213                    http://web.mit.edu/phaley/www/
> 77 Massachusetts Avenue
> Cambridge, MA  02139-4301
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
>

