[Gluster-devel] gluster 3.7.8 page allocation failure

David Robinson david.robinson at corvidtec.com
Thu Feb 11 23:58:03 UTC 2016


I am sorting a fairly large file (27 million lines) and the output is
being written to my gluster storage. This seems to crash glusterfsd on
3.7.8, as shown in the kernel log below.
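
Roughly what the job looks like, as a minimal sketch (the file name and
mount point below are placeholders, not the real paths):

    # sort a ~27-million-line file from local disk and write the result
    # onto the FUSE-mounted gfsbackup volume (paths are hypothetical)
    sort /local/scratch/input.txt > /mnt/gfsbackup/input.sorted
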
Can anyone help?

David


[Thu Feb 11 18:25:24 2016] glusterfsd: page allocation failure. order:5, mode:0x20
[Thu Feb 11 18:25:24 2016] Pid: 17868, comm: glusterfsd Not tainted 2.6.32-573.12.1.el6.x86_64 #1
[Thu Feb 11 18:25:24 2016] Call Trace:
[Thu Feb 11 18:25:24 2016]  [<ffffffff811376ac>] ? __alloc_pages_nodemask+0x7dc/0x950
[Thu Feb 11 18:25:24 2016]  [<ffffffffa02cba00>] ? mlx4_ib_post_send+0x6c0/0x1f90 [mlx4_ib]
[Thu Feb 11 18:25:24 2016]  [<ffffffffa037076c>] ? xfs_iext_bno_to_ext+0x8c/0x170 [xfs]
[Thu Feb 11 18:25:24 2016]  [<ffffffff81176f92>] ? kmem_getpages+0x62/0x170
[Thu Feb 11 18:25:24 2016]  [<ffffffff81177baa>] ? fallback_alloc+0x1ba/0x270
[Thu Feb 11 18:25:24 2016]  [<ffffffff811775ff>] ? cache_grow+0x2cf/0x320
[Thu Feb 11 18:25:24 2016]  [<ffffffff81177929>] ? ____cache_alloc_node+0x99/0x160
[Thu Feb 11 18:25:24 2016]  [<ffffffff8145fdb2>] ? pskb_expand_head+0x62/0x280
[Thu Feb 11 18:25:24 2016]  [<ffffffff81178579>] ? __kmalloc+0x199/0x230
[Thu Feb 11 18:25:24 2016]  [<ffffffff8145fdb2>] ? pskb_expand_head+0x62/0x280
[Thu Feb 11 18:25:24 2016]  [<ffffffff812761c2>] ? get_request+0x302/0x3c0
[Thu Feb 11 18:25:24 2016]  [<ffffffff8146069a>] ? __pskb_pull_tail+0x2aa/0x360
[Thu Feb 11 18:25:24 2016]  [<ffffffff8146f9e9>] ? harmonize_features+0x29/0x70
[Thu Feb 11 18:25:24 2016]  [<ffffffff81470054>] ? dev_hard_start_xmit+0x1c4/0x490
[Thu Feb 11 18:25:24 2016]  [<ffffffff8148d53a>] ? sch_direct_xmit+0x15a/0x1c0
[Thu Feb 11 18:25:24 2016]  [<ffffffff814705c8>] ? dev_queue_xmit+0x228/0x320
[Thu Feb 11 18:25:24 2016]  [<ffffffff81476cbd>] ? neigh_connected_output+0xbd/0x100
[Thu Feb 11 18:25:24 2016]  [<ffffffff814ac217>] ? ip_finish_output+0x287/0x360
[Thu Feb 11 18:25:24 2016]  [<ffffffff814ac3a8>] ? ip_output+0xb8/0xc0
[Thu Feb 11 18:25:24 2016]  [<ffffffff814ab635>] ? ip_local_out+0x25/0x30
[Thu Feb 11 18:25:24 2016]  [<ffffffff814abb30>] ? ip_queue_xmit+0x190/0x420
[Thu Feb 11 18:25:24 2016]  [<ffffffff81136ff9>] ? __alloc_pages_nodemask+0x129/0x950
[Thu Feb 11 18:25:24 2016]  [<ffffffff814c1204>] ? tcp_transmit_skb+0x4b4/0x8b0
[Thu Feb 11 18:25:24 2016]  [<ffffffff814c374a>] ? tcp_write_xmit+0x1da/0xa90
[Thu Feb 11 18:25:24 2016]  [<ffffffff81178dbd>] ? __kmalloc_node+0x4d/0x60
[Thu Feb 11 18:25:24 2016]  [<ffffffff814c4030>] ? tcp_push_one+0x30/0x40
[Thu Feb 11 18:25:24 2016]  [<ffffffff814b46bc>] ? tcp_sendmsg+0x9cc/0xa20
[Thu Feb 11 18:25:24 2016]  [<ffffffff814589eb>] ? sock_aio_write+0x19b/0x1c0
[Thu Feb 11 18:25:24 2016]  [<ffffffff81458850>] ? sock_aio_write+0x0/0x1c0
[Thu Feb 11 18:25:24 2016]  [<ffffffff8119179b>] ? do_sync_readv_writev+0xfb/0x140
[Thu Feb 11 18:25:24 2016]  [<ffffffffa0345a66>] ? xfs_attr_get+0xb6/0xc0 [xfs]
[Thu Feb 11 18:25:24 2016]  [<ffffffffa039f7ef>] ? __xfs_xattr_get+0x2f/0x50 [xfs]
[Thu Feb 11 18:25:24 2016]  [<ffffffff810a1460>] ? autoremove_wake_function+0x0/0x40
[Thu Feb 11 18:25:24 2016]  [<ffffffff811ba34c>] ? getxattr+0x9c/0x170
[Thu Feb 11 18:25:24 2016]  [<ffffffff81231a16>] ? security_file_permission+0x16/0x20
[Thu Feb 11 18:25:24 2016]  [<ffffffff81192846>] ? do_readv_writev+0xd6/0x1f0
[Thu Feb 11 18:25:24 2016]  [<ffffffff811929a6>] ? vfs_writev+0x46/0x60
[Thu Feb 11 18:25:24 2016]  [<ffffffff81192ad1>] ? sys_writev+0x51/0xd0
[Thu Feb 11 18:25:24 2016]  [<ffffffff810e884e>] ? __audit_syscall_exit+0x25e/0x290
[Thu Feb 11 18:25:24 2016]  [<ffffffff8100b0d2>] ? system_call_fastpath+0x16/0x1b
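
For what it's worth, "order:5" means the kernel needed 2^5 = 32 physically
contiguous pages (128 KiB) in a single allocation, and on this 2.6.32 kernel
"mode:0x20" corresponds to GFP_ATOMIC, so the request came from the TCP
transmit path feeding the mlx4 interface, where it cannot sleep or reclaim
and fails as soon as free memory is fragmented. Checking the buddy allocator
shows whether any order-5 blocks are left (the commands below are just how
I would look at it, not output from the failure itself):

    # free block counts per zone for orders 0..10; the sixth number on each
    # line is how many order-5 (128 KiB) chunks are still available
    cat /proc/buddyinfo

    # knob sometimes raised on EL6 so atomic allocations have more headroom;
    # the appropriate value is workload-dependent
    sysctl vm.min_free_kbytes
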
[root@gfs02bkp ~]# gluster volume info
Volume Name: gfsbackup
Type: Distribute
Volume ID: e78d5123-d9bc-4d88-9c73-61d28abf0b41
Status: Started
Number of Bricks: 7
Transport-type: tcp
Bricks:
Brick1: gfsib01bkp.corvidtec.com:/data/brick01bkp/gfsbackup
Brick2: gfsib01bkp.corvidtec.com:/data/brick02bkp/gfsbackup
Brick3: gfsib02bkp.corvidtec.com:/data/brick01bkp/gfsbackup
Brick4: gfsib02bkp.corvidtec.com:/data/brick02bkp/gfsbackup
Brick5: gfsib02bkp.corvidtec.com:/data/brick03bkp/gfsbackup
Brick6: gfsib02bkp.corvidtec.com:/data/brick04bkp/gfsbackup
Brick7: gfsib02bkp.corvidtec.com:/data/brick05bkp/gfsbackup
Options Reconfigured:
nfs.disable: off
server.allow-insecure: on
storage.owner-gid: 100
server.manage-gids: on
cluster.lookup-optimize: on
server.event-threads: 8
client.event-threads: 8
changelog.changelog: off
storage.build-pgfid: on
performance.readdir-ahead: on
diagnostics.brick-log-level: WARNING
diagnostics.client-log-level: WARNING

[root@gfs02bkp ~]# rpm -qa | grep gluster
glusterfs-fuse-3.7.8-1.el6.x86_64
glusterfs-geo-replication-3.7.8-1.el6.x86_64
python-gluster-3.7.8-1.el6.noarch
glusterfs-client-xlators-3.7.8-1.el6.x86_64
glusterfs-server-3.7.8-1.el6.x86_64
glusterfs-api-devel-3.7.8-1.el6.x86_64
glusterfs-debuginfo-3.7.8-1.el6.x86_64
glusterfs-3.7.8-1.el6.x86_64
glusterfs-cli-3.7.8-1.el6.x86_64
glusterfs-devel-3.7.8-1.el6.x86_64
glusterfs-rdma-3.7.8-1.el6.x86_64
glusterfs-libs-3.7.8-1.el6.x86_64
glusterfs-extra-xlators-3.7.8-1.el6.x86_64
glusterfs-api-3.7.8-1.el6.x86_64
glusterfs-resource-agents-3.7.8-1.el6.noarch


========================



David F. Robinson, Ph.D.
President - Corvid Technologies
145 Overhill Drive
Mooresville, NC 28117
704.799.6944 x101   [Office]
704.252.1310        [Cell]
704.799.7974        [Fax]
david.robinson at corvidtec.com
http://www.corvidtec.com


