[Bugs] [Bug 1362621] New: Ganesha crashes during multithreaded reads on v3 mounts

bugzilla at redhat.com
Tue Aug 2 16:38:06 UTC 2016


https://bugzilla.redhat.com/show_bug.cgi?id=1362621

            Bug ID: 1362621
           Summary: Ganesha crashes during multithreaded reads on v3
                    mounts
           Product: GlusterFS
           Version: 3.8.1
         Component: ganesha-nfs
          Severity: high
          Assignee: bugs at gluster.org
          Reporter: asoman at redhat.com
                CC: bugs at gluster.org



Description of problem:
-----------------------

Ganesha crashed on 2 of 4 nodes during multithreaded Iozone reads run from 4
clients with 16 threads.

Exact workload: iozone -+m <config file> -+h <hostname> -C -w -c -e -i 1 -+n
-r 64k -s 8g -t 16

The same issue is reproducible by creating files on the mount point with the
smallfile tool and then reading them back in a multithreaded, distributed way.
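For reference, a sketch of the smallfile-based reproducer described above. The
install path, mount point, and file counts below are placeholders, not taken
from the report; smallfile's --operation/--threads/--files/--file-size/--top
options are from the upstream tool:

```shell
#!/bin/sh
# Hypothetical paths -- adjust to the actual environment.
SMF=/opt/smallfile/smallfile_cli.py   # assumed smallfile checkout location
TOP=/mnt/testvol/smf                  # assumed directory on the NFSv3 mount

# Phase 1: populate the mount point with many small files.
create_cmd="python $SMF --operation create --threads 16 --files 10000 --file-size 64 --top $TOP"

# Phase 2: read them back with the same thread count (this is where
# the crash was observed on the server side).
read_cmd="python $SMF --operation read --threads 16 --files 10000 --file-size 64 --top $TOP"

# Dry-run: print the commands rather than executing them here.
echo "$create_cmd"
echo "$read_cmd"
```

Running the read phase concurrently from several clients matches the
"multithreaded-distributed" access pattern in the report.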

Version-Release number of selected component (if applicable):
-------------------------------------------------------------

[root at gqas015 ~]# rpm -qa|grep ganesha

glusterfs-ganesha-3.8.1-0.4.git56fcf39.el7rhgs.x86_64
nfs-ganesha-2.4.0-0.14dev26.el7.centos.x86_64
nfs-ganesha-gluster-2.4.0-0.14dev26.el7.centos.x86_64
nfs-ganesha-debuginfo-2.4.0-0.14dev26.el7.centos.x86_64


How reproducible:
-----------------

2/4

Steps to Reproduce:
-------------------

1.  Set up 4 clients and 4 servers. Mount the gluster volume via NFSv3, with
each client mounting from one server.

2.  Run multithreaded iozone sequential writes in a distributed way:

iozone -+m <config file> -+h <hostname> -C -w -c -e -i 0 -+n -r 64k -s 8g -t
16

3.  Run sequential reads the same way:

iozone -+m <config file> -+h <hostname> -C -w -c -e -i 1 -+n -r 64k -s 8g -t
16
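The iozone client config file passed via -+m lists one line per thread, in the
form "client-hostname working-directory path-to-iozone" (per the iozone man
page). A minimal sketch of generating one for this setup, with placeholder
hostnames and paths (the real clients are not named in the report):

```shell
#!/bin/sh
# Hypothetical values -- substitute the real client names and mount point.
MNT=/mnt/testvol            # assumed NFSv3 mount point on every client
IOZONE=/usr/bin/iozone      # assumed iozone path on the clients

# 4 clients x 4 lines each = 16 entries, matching "-t 16" above.
for host in client1 client2 client3 client4; do
  for i in 1 2 3 4; do
    echo "$host $MNT $IOZONE"
  done
done > iozone_clients.cfg

cat iozone_clients.cfg
# The controlling node would then run, e.g.:
#   iozone -+m iozone_clients.cfg -+h <controller hostname> -C -w -c -e \
#       -i 1 -+n -r 64k -s 8g -t 16
```

The -+h argument names the host running the coordinating iozone process so
the remote client threads can report back to it.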

Actual results:
---------------

Ganesha crashed on 2 of 4 nodes.

Expected results:
----------------

Ganesha should not crash.

Additional info:
----------------

Volume Name: testvol
Type: Distributed-Replicate
Volume ID: 9e8d9c1a-33da-4645-a6ad-630df25cb654
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: gqas001.sbu.lab.eng.bos.redhat.com:/bricks/testvol_brick0
Brick2: gqas014.sbu.lab.eng.bos.redhat.com:/bricks/testvol_brick1
Brick3: gqas015.sbu.lab.eng.bos.redhat.com:/bricks/testvol_brick2
Brick4: gqas016.sbu.lab.eng.bos.redhat.com:/bricks/testvol_brick3
Options Reconfigured:
diagnostics.count-fop-hits: on
diagnostics.latency-measurement: on
client.event-threads: 4
server.event-threads: 4
cluster.lookup-optimize: on
ganesha.enable: on
features.cache-invalidation: off
nfs.disable: on
performance.readdir-ahead: on
performance.stat-prefetch: off
server.allow-insecure: on
nfs-ganesha: enable
cluster.enable-shared-storage: enable
[root at gqas015 ~]#
