[Bugs] [Bug 1221511] New: nfs-ganesha: nfsd process OOM-killed while executing the POSIX test suite

bugzilla at redhat.com bugzilla at redhat.com
Thu May 14 08:45:53 UTC 2015


https://bugzilla.redhat.com/show_bug.cgi?id=1221511

            Bug ID: 1221511
           Summary: nfs-ganesha: nfsd process OOM-killed while executing
                    the POSIX test suite
           Product: GlusterFS
           Version: 3.7.0
         Component: ganesha-nfs
          Severity: high
          Assignee: bugs@gluster.org
          Reporter: saujain@redhat.com



Created attachment 1025317
  --> https://bugzilla.redhat.com/attachment.cgi?id=1025317&action=edit
dmesg log from node1

Description of problem:
The nfs-ganesha process was killed by the kernel OOM killer while the POSIX
test suite was running. Kernel trace from dmesg (the full log from node1 is
attached):

glusterfsd invoked oom-killer: gfp_mask=0x201da, order=0, oom_adj=0,
oom_score_adj=0
glusterfsd cpuset=/ mems_allowed=0
Pid: 31429, comm: glusterfsd Not tainted 2.6.32-504.12.2.el6.x86_64 #1
Call Trace:
 [<ffffffff810d40c1>] ? cpuset_print_task_mems_allowed+0x91/0xb0
 [<ffffffff81127300>] ? dump_header+0x90/0x1b0
 [<ffffffff8122eb5c>] ? security_real_capable_noaudit+0x3c/0x70
 [<ffffffff81127782>] ? oom_kill_process+0x82/0x2a0
 [<ffffffff811276c1>] ? select_bad_process+0xe1/0x120
 [<ffffffff81127bc0>] ? out_of_memory+0x220/0x3c0
 [<ffffffff811344df>] ? __alloc_pages_nodemask+0x89f/0x8d0
 [<ffffffff8116c69a>] ? alloc_pages_current+0xaa/0x110
 [<ffffffff811246f7>] ? __page_cache_alloc+0x87/0x90
 [<ffffffff811240de>] ? find_get_page+0x1e/0xa0
 [<ffffffff81125697>] ? filemap_fault+0x1a7/0x500
 [<ffffffff8114eae4>] ? __do_fault+0x54/0x530
 [<ffffffff8114f0b7>] ? handle_pte_fault+0xf7/0xb00
 [<ffffffff8114fcea>] ? handle_mm_fault+0x22a/0x300
 [<ffffffff8104d0d8>] ? __do_page_fault+0x138/0x480
 [<ffffffff81041e98>] ? pvclock_clocksource_read+0x58/0xd0
 [<ffffffff8153003e>] ? do_page_fault+0x3e/0xa0
 [<ffffffff8152d3f5>] ? page_fault+0x25/0x30
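
The trace above shows glusterfsd triggering the failing allocation; the
process that was actually killed, and its memory footprint at that point,
is recorded in the kernel's kill message. A minimal way to pull that out of
the attached dmesg log (or from the node itself), assuming the standard
RHEL 6 kernel OOM message wording:

# run on the affected node, or grep the attached dmesg text instead
dmesg | grep -iE 'invoked oom-killer|out of memory|killed process'
grep -i oom /var/log/messages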


[root@nfs2 ~]# gluster volume status
Status of volume: gluster_shared_storage
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.37.148:/rhs/brick1/d1r1-share   49156     0          Y       3549 
Brick 10.70.37.77:/rhs/brick1/d1r2-share    49155     0          Y       3329 
Brick 10.70.37.76:/rhs/brick1/d2r1-share    49155     0          Y       3081 
Brick 10.70.37.69:/rhs/brick1/d2r2-share    49155     0          Y       3346 
Brick 10.70.37.148:/rhs/brick1/d3r1-share   49157     0          Y       3566 
Brick 10.70.37.77:/rhs/brick1/d3r2-share    49156     0          Y       3346 
Brick 10.70.37.76:/rhs/brick1/d4r1-share    49156     0          Y       3098 
Brick 10.70.37.69:/rhs/brick1/d4r2-share    49156     0          Y       3363 
Brick 10.70.37.148:/rhs/brick1/d5r1-share   49158     0          Y       3583 
Brick 10.70.37.77:/rhs/brick1/d5r2-share    49157     0          Y       3363 
Brick 10.70.37.76:/rhs/brick1/d6r1-share    49157     0          Y       3115 
Brick 10.70.37.69:/rhs/brick1/d6r2-share    49157     0          Y       3380 
Self-heal Daemon on localhost               N/A       N/A        Y       25893
Self-heal Daemon on 10.70.37.69             N/A       N/A        Y       28389
Self-heal Daemon on 10.70.37.77             N/A       N/A        Y       4784 
Self-heal Daemon on 10.70.37.148            N/A       N/A        Y       22717

Task Status of Volume gluster_shared_storage
------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: vol2
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.37.148:/rhs/brick1/d1r1         49153     0          Y       22219
Brick 10.70.37.77:/rhs/brick1/d1r2          49152     0          Y       4321 
Brick 10.70.37.76:/rhs/brick1/d2r1          49152     0          Y       25654
Brick 10.70.37.69:/rhs/brick1/d2r2          49152     0          Y       27914
Brick 10.70.37.148:/rhs/brick1/d3r1         49154     0          Y       18842
Brick 10.70.37.77:/rhs/brick1/d3r2          49153     0          Y       4343 
Brick 10.70.37.76:/rhs/brick1/d4r1          49153     0          Y       25856
Brick 10.70.37.69:/rhs/brick1/d4r2          49153     0          Y       27934
Brick 10.70.37.148:/rhs/brick1/d5r1         49155     0          Y       22237
Brick 10.70.37.77:/rhs/brick1/d5r2          49154     0          Y       4361 
Brick 10.70.37.76:/rhs/brick1/d6r1          49154     0          Y       25874
Brick 10.70.37.69:/rhs/brick1/d6r2          49154     0          Y       27952
Self-heal Daemon on localhost               N/A       N/A        Y       25893
Self-heal Daemon on 10.70.37.77             N/A       N/A        Y       4784 
Self-heal Daemon on 10.70.37.69             N/A       N/A        Y       28389
Self-heal Daemon on 10.70.37.148            N/A       N/A        Y       22717

Task Status of Volume vol2
------------------------------------------------------------------------------
There are no active volume tasks


Version-Release number of selected component (if applicable):
glusterfs-3.7.0beta2-0.0.el6.x86_64
nfs-ganesha-2.2.0-0.el6.x86_64

How reproducible:
Seen once so far; this is the first occurrence of the OOM kill.

Steps to Reproduce:
1. Create a 6x2 distributed-replicate volume and start it.
2. Complete the nfs-ganesha prerequisites and start nfs-ganesha.
3. Mount the volume over NFS with vers=3.
4. Run the POSIX test suite on the mount (a rough command sketch follows
   below).
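
For reference, a rough sketch of the commands behind steps 1-4. The brick
paths are taken from the volume status output above; the ganesha.conf
export (FSAL_GLUSTER), the service invocation, and the client mount point
are assumptions that depend on the local setup:

# 1. create and start a 6x2 distributed-replicate volume
gluster volume create vol2 replica 2 \
    10.70.37.148:/rhs/brick1/d1r1 10.70.37.77:/rhs/brick1/d1r2 \
    10.70.37.76:/rhs/brick1/d2r1 10.70.37.69:/rhs/brick1/d2r2 \
    10.70.37.148:/rhs/brick1/d3r1 10.70.37.77:/rhs/brick1/d3r2 \
    10.70.37.76:/rhs/brick1/d4r1 10.70.37.69:/rhs/brick1/d4r2 \
    10.70.37.148:/rhs/brick1/d5r1 10.70.37.77:/rhs/brick1/d5r2 \
    10.70.37.76:/rhs/brick1/d6r1 10.70.37.69:/rhs/brick1/d6r2
gluster volume start vol2

# 2. prerequisites: disable gluster's built-in NFS on the volume, export
#    the volume via FSAL_GLUSTER in ganesha.conf, then start the daemon
gluster volume set vol2 nfs.disable on
service nfs-ganesha start

# 3. on the client, mount over NFSv3
mount -t nfs -o vers=3 10.70.37.148:/vol2 /mnt/vol2

# 4. run the POSIX test suite against /mnt/vol2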

Actual results:
The nfs-ganesha process is OOM-killed during the run.

Expected results:
No OOM kill; the POSIX test suite is a standard workload and should not
drive the server process out of memory.

Additional info:

BZ 1221489 was also filed in connection with the POSIX test suite run.
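
If the test is re-run, the nfs-ganesha daemon's memory growth can be
tracked alongside it to see how quickly it approaches the OOM threshold.
A minimal sketch, assuming the daemon binary is named ganesha.nfsd and a
60-second sampling interval:

# sample RSS/VSZ of the ganesha daemon once a minute during the test run
while true; do
    date
    ps -o pid,rss,vsz,comm -C ganesha.nfsd
    sleep 60
done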
