[Gluster-users] Missing 'status fd' and 'top *-perf' details

Rumen Telbizov telbizov at gmail.com
Thu Feb 12 17:50:39 UTC 2015


Am I the only one experiencing this? Do these commands return proper statistics on your setups?

On Wed, Feb 11, 2015 at 1:29 PM, Rumen Telbizov <telbizov at gmail.com> wrote:

> Hello everyone,
>
> I have the following situation. I put some read and write load on my test
> GlusterFS setup as follows:
>
> # dd if=/dev/zero of=file2 bs=1M count=3000
> # cat file2 > /dev/null
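>
> Side note: the cat above probably reads file2 straight from the client page
> cache, so the bricks may never see the reads, and dd without a sync flag can
> return before much data has actually reached the bricks. A variant I could
> try, assuming the volume is mounted at /mnt/myvolume (that path is just a
> placeholder for wherever the volume is mounted):
>
> # dd if=/dev/zero of=/mnt/myvolume/file2 bs=1M count=3000 conv=fdatasync
> # echo 3 > /proc/sys/vm/drop_caches
> # dd if=/mnt/myvolume/file2 of=/dev/null bs=1M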
>
>
> While the above was running I tried to gather some statistics and found that
> 'status fd' doesn't list any open file descriptors and 'top read-perf' /
> 'top write-perf' report only 0 MBps. Here are the details, along with a
> couple of things I could try differently, noted inline below:
>
> # gluster volume status myvolume fd
> FD tables for volume myvolume
> ----------------------------------------------
> Brick : 10.12.10.7:/var/lib/glusterfs_disks/disk01/brick
> ----------------------------------------------
> Brick : 10.12.10.8:/var/lib/glusterfs_disks/disk01/brick
> ----------------------------------------------
> Brick : 10.12.10.9:/var/lib/glusterfs_disks/disk01/brick
> ----------------------------------------------
>
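> As far as I understand, the FD tables only list descriptors that are open on
> the bricks at the instant the command runs, so unless a transfer is still in
> flight the tables come back empty. One thing I could try, again assuming the
> /mnt/myvolume mount path, is to keep a long write running in the background
> and query from another terminal:
>
> # dd if=/dev/zero of=/mnt/myvolume/file3 bs=1M count=10000 &
> # gluster volume status myvolume fd
>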
>
>
> # gluster volume top myvolume write-perf
> Brick: 10.12.10.7:/var/lib/glusterfs_disks/disk01/brick
> MBps Filename                                        Time
> ==== ========                                        ====
>    0 /file2                                          2015-02-11 21:01:57.797129
>    0 /file2                                          2015-02-11 21:00:39.605479
>    0 /file1                                          2015-02-11 20:59:13.890372
>    0 /file2                                          2015-02-11 20:47:48.062088
>    0 /file2                                          2015-02-11 20:45:46.005462
>    0 /file                                           2015-02-11 18:25:19.961485
> Brick: 10.12.10.8:/var/lib/glusterfs_disks/disk01/brick
> MBps Filename                                        Time
> ==== ========                                        ====
>    0 /file2                                          2015-02-11 21:01:40.369140
>    0 /file2                                          2015-02-11 21:00:22.180878
>    0 /file1                                          2015-02-11 20:58:56.464305
>    0 /file2                                          2015-02-11 20:47:30.646403
>    0 /file2                                          2015-02-11 20:45:28.593213
>    0 /file                                           2015-02-11 18:25:02.669979
> Brick: 10.12.10.9:/var/lib/glusterfs_disks/disk01/brick
> MBps Filename                                        Time
> ==== ========                                        ====
>    0 /file2                                          2015-02-11 21:02:09.552385
>    0 /file2                                          2015-02-11 21:00:51.357701
>    0 /file1                                          2015-02-11 20:59:25.623650
>    0 /file2                                          2015-02-11 20:47:59.816884
>    0 /file2                                          2015-02-11 20:45:57.744475
>    0 /file                                           2015-02-11 18:25:31.733212
>
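> If I am reading the CLI help correctly, 'top read-perf' and 'top write-perf'
> can also run their own throughput measurement when given a block size (in
> bytes) and a count, rather than relying on recorded figures. Something like
> the following, where the bs/count values are just guesses on my part:
>
> # gluster volume top myvolume write-perf bs 1048576 count 100 list-cnt 10
> # gluster volume top myvolume read-perf bs 1048576 count 100 list-cnt 10
>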
>
> My setup is:
>
> # glusterfs -V
> glusterfs 3.5.3 built on Nov 17 2014 15:48:52
>
>
> # gluster volume info
> Volume Name: myvolume
> Type: Replicate
> Volume ID: e513a56f-049f-4c8e-bc75-4fb789e06c37
> Status: Started
> Number of Bricks: 1 x 3 = 3
> Transport-type: tcp
> Bricks:
> Brick1: 10.12.10.7:/var/lib/glusterfs_disks/disk01/brick
> Brick2: 10.12.10.8:/var/lib/glusterfs_disks/disk01/brick
> Brick3: 10.12.10.9:/var/lib/glusterfs_disks/disk01/brick
> Options Reconfigured:
> network.ping-timeout: 10
> nfs.disable: on
> client.ssl: off
> server.ssl: off
>
> Has anyone else experienced this?
>
> Regards,
> --
> Rumen Telbizov
> Unix Systems Administrator <http://telbizov.com>
>



-- 
Rumen Telbizov
Unix Systems Administrator <http://telbizov.com>