[Gluster-users] glusterfs/nfs OOM killed

Joe Julian joe at julianfamily.org
Thu Mar 20 18:45:46 UTC 2014


You might also be interested in 
http://backdrift.org/how-to-create-oom-killer-exceptions
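The gist of that article is telling the kernel's OOM killer to skip a given process. A minimal sketch of the idea (not taken verbatim from the article) using the `/proc/<pid>/oom_score_adj` interface available on kernels >= 2.6.36 is below; note that RHEL6-era kernels may instead expose the older `/proc/<pid>/oom_adj` file with a -17..15 range, and that lowering a score requires root:

```shell
#!/bin/sh
# Write an OOM score adjustment (-1000 .. 1000) for the given PID.
# -1000 makes the OOM killer skip the process entirely. Raising the
# value is allowed unprivileged; lowering it below the current setting
# needs root (CAP_SYS_RESOURCE).
oom_adjust() {
    echo "$2" > "/proc/$1/oom_score_adj"
}

# Example (as root): exempt every running glusterfs process.
# for pid in $(pidof glusterfs); do oom_adjust "$pid" -1000; done
```

The adjustment is per-process and does not survive a restart, so in practice you would hook it into the service's init script.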

On 03/20/2014 11:42 AM, Paul Robert Marino wrote:
> You ran out of RAM.
> Tune your box, or take things off of it if it's running anything else.
> When you run out of memory, the kernel just kills whichever process it
> happens to pick; it may or may not be the process actually at
> fault.
>
> On Thu, Mar 20, 2014 at 9:42 AM, Jens Laas <jens.laas at uadm.uu.se> wrote:
>> 4GB server (RHEL6).
>> glusterfs-3.4.2-1.el6.x86_64 etc. from the Gluster site.
>>
>> Copying files via NFS to gluster.
>>
>> Out of memory: Kill process 18225 (glusterfs) score 660 or sacrifice child
>> Killed process 18225, UID 0, (glusterfs) total-vm:3675904kB, anon-rss:3422940kB,
>> file-rss:2072kB
>>
>> [2014-03-20 13:19:24.951428] D [nfs3-helpers.c:1618:nfs3_log_common_call]
>> 0-nfs-nfsv3: XID: 9fd2b6b3, ACCESS: args: FH: exportid
>> c172abbc-6cc8-4b65-ad35-c34f10b53869, gfid cc40b980-8a77-4cbb-ba23-147e37059a2d
>> [2014-03-20 13:19:24.951550] D [mem-pool.c:422:mem_get]
>> (-->/usr/lib64/glusterfs/3.4.2/xlator/nfs/server.so(nfs3svc_access+0x7c)
>> [0x7ffff3110b8c]
>> (-->/usr/lib64/glusterfs/3.4.2/xlator/nfs/server.so(nfs3_access+0xfd)
>> [0x7ffff311082d]
>> (-->/usr/lib64/glusterfs/3.4.2/xlator/nfs/server.so(nfs3_call_state_init+0x45)
>> [0x7ffff310b535]))) 0-mem-pool: Mem pool is full. Callocing mem
>> [2014-03-20 13:19:24.951684] D [afr-common.c:745:afr_get_call_child]
>> 0-gv0-replicate-0: Returning 0, call_child: 1, last_index: -1
>> [2014-03-20 13:19:24.952050] D [nfs3-helpers.c:3380:nfs3_log_common_res]
>> 0-nfs-nfsv3: XID: 9fd2b6b3, ACCESS: NFS: 0(Call completed successfully.), POSIX:
>> 7(Argument list too long)
>> [2014-03-20 13:19:24.952458] D [nfs3-helpers.c:1675:nfs3_log_create_call]
>> 0-nfs-nfsv3: XID: a0d2b6b3, CREATE: args: FH: exportid
>> c172abbc-6cc8-4b65-ad35-c34f10b53869, gfid cc40b980-8a77-4cbb-ba23-147e37059a2d,
>> name: 1.0, mode: EXCLUSIVE
>> [2014-03-20 13:19:24.952522] D [mem-pool.c:422:mem_get]
>> (-->/usr/lib64/glusterfs/3.4.2/xlator/nfs/server.so(nfs3svc_create+0xa7)
>> [0x7ffff3115e27]
>> (-->/usr/lib64/glusterfs/3.4.2/xlator/nfs/server.so(nfs3_create+0x38b)
>> [0x7ffff3115c5b]
>> (-->/usr/lib64/glusterfs/3.4.2/xlator/nfs/server.so(nfs3_call_state_init+0x45)
>> [0x7ffff310b535]))) 0-mem-pool: Mem pool is full. Callocing mem
>> [2014-03-20 13:19:24.954278] D
>> [afr-transaction.c:1144:afr_post_nonblocking_entrylk_cbk] 0-gv0-replicate-0: Non
>> blocking entrylks done. Proceeding to FOP
>> [2014-03-20 13:19:24.966742] D [afr-lk-common.c:447:transaction_lk_op]
>> 0-gv0-replicate-0: lk op is for a transaction
>> [2014-03-20 13:19:24.967190] D
>> [afr-transaction.c:1094:afr_post_nonblocking_inodelk_cbk] 0-gv0-replicate-0: Non
>> blocking inodelks done. Proceeding to FOP
>> [2014-03-20 13:19:24.967324] D [client-rpc-fops.c:2789:client_fdctx_destroy]
>> 0-gv0-client-0: sending release on fd
>> [2014-03-20 13:19:24.967362] D [client-rpc-fops.c:2789:client_fdctx_destroy]
>> 0-gv0-client-1: sending release on fd
>> [2014-03-20 13:19:24.967485] D [nfs3-helpers.c:3449:nfs3_log_newfh_res]
>> 0-nfs-nfsv3: XID: a0d2b6b3, CREATE: NFS: 0(Call completed successfully.), POSIX:
>> 0(Success), FH: exportid c172abbc-6cc8-4b65-ad35-c34f10b53869, gfid
>> be79d2ff-0339-435f-ac36-b313e089e245
>> [Thread 0x7ffff0e27700 (LWP 18236) exited]
>> [Thread 0x7ffff4852700 (LWP 18231) exited]
>> [Thread 0x7ffff568a700 (LWP 18230) exited]
>> [Thread 0x7ffff608b700 (LWP 18229) exited]
>> [Thread 0x7ffff6a8c700 (LWP 18228) exited]
>>
>> Program terminated with signal SIGKILL, Killed.
>> The program no longer exists.
>> (gdb)
>>
>> Regards,
>> Jens
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users at gluster.org
>> http://supercolony.gluster.org/mailman/listinfo/gluster-users



