[Gluster-users] swap + crash
paf1 at email.cz
Tue Jun 23 14:01:41 UTC 2015
Hello,
we have a big problem with metadata I/O errors and a swapping issue on the following setup:
OS = RHEL 7 (7-1.1503.el7.centos.2.8)
kernel = 3.10.0-229.4.2.el7.x86_64
KVM version = 2.1.2-23.el7_1.3.1
libvirt = libvirt-1.2.8-16.el7_1.3
gluster = glusterfs-3.7.1-1.el7 (replica 2 on XFS)
Phys. MEM = 128667 MB total, 23160 MB used, 105507 MB free
swap = 398334 MB total, 32049 MB used, 366285 MB free
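(For anyone comparing setups: a minimal sketch of how these figures can be gathered on a stock RHEL 7 host; the exact package names are assumptions:

    # package and kernel versions
    rpm -q glusterfs qemu-kvm libvirt
    uname -r
    # memory and swap totals/usage in MB
    free -m
)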
Can anybody give us an idea of what's wrong?
The system is swapping continually in spite of a lot of free memory being available (we added a large temporary swap space).
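Swap pressure and memory fragmentation can be inspected like this; a minimal sketch, assuming the standard procfs/sysctl interfaces on RHEL 7:

    # how aggressively the kernel swaps anonymous pages (RHEL 7 default: 60)
    cat /proc/sys/vm/swappiness
    # free contiguous blocks per allocation order and zone; near-zero values
    # in the order>=2 columns mean fragmentation despite lots of free memory
    cat /proc/buddyinfo
    # ongoing swap-in/swap-out activity in the si/so columns
    vmstat 5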
These messages are written to /var/log/messages constantly:
Journal: End of file while reading data: Input/output error
... and
Jun 23 15:46:04 1kvm1 journal: metadata not found: Requested metadata element is not present
Jun 23 15:46:06 1kvm1 journal: metadata not found: Requested metadata element is not present
Jun 23 15:46:09 1kvm1 kernel: swapper/16: page allocation failure: order:2, mode:0x104020
Jun 23 15:46:09 1kvm1 kernel: CPU: 16 PID: 0 Comm: swapper/16 Tainted: G I -------------- 3.10.0-229.4.2.el7.x86_64 #1
Jun 23 15:46:09 1kvm1 kernel: Hardware name: Supermicro X10SRi-F/X10SRi-F, BIOS 1.0a 08/27/2014
Jun 23 15:46:09 1kvm1 kernel: 0000000000104020 04badf98503e3f59 ffff881fff403a00 ffffffff81604eaa
Jun 23 15:46:09 1kvm1 kernel: ffff881fff403a90 ffffffff8115c620 ffff88207fffd4e8 ffff881fff403a50
Jun 23 15:46:09 1kvm1 kernel: ffff88207ffd8000 ffff88207ffd8000 0000000000000002 04badf98503e3f59
Jun 23 15:46:09 1kvm1 kernel: Call Trace:
Jun 23 15:46:09 1kvm1 kernel: <IRQ> [<ffffffff81604eaa>] dump_stack+0x19/0x1b
Jun 23 15:46:09 1kvm1 kernel: [<ffffffff8115c620>] warn_alloc_failed+0x110/0x180
Jun 23 15:46:09 1kvm1 kernel: [<ffffffff81160db8>] __alloc_pages_nodemask+0x9a8/0xb90
Jun 23 15:46:09 1kvm1 kernel: [<ffffffff8155ab15>] ? tcp_v4_do_rcv+0x1b5/0x470
Jun 23 15:46:09 1kvm1 kernel: [<ffffffff8119f549>] alloc_pages_current+0xa9/0x170
Jun 23 15:46:09 1kvm1 kernel: [<ffffffff8115b59e>] __get_free_pages+0xe/0x50
Jun 23 15:46:09 1kvm1 kernel: [<ffffffff811aa32e>] kmalloc_order_trace+0x2e/0xa0
Jun 23 15:46:09 1kvm1 kernel: [<ffffffff811acbc9>] __kmalloc+0x219/0x230
Jun 23 15:46:09 1kvm1 kernel: [<ffffffffa02f02ea>] bnx2x_frag_alloc.isra.64+0x2a/0x40 [bnx2x]
Jun 23 15:46:09 1kvm1 kernel: [<ffffffffa02f15b4>] bnx2x_alloc_rx_data.isra.71+0x54/0x1c0 [bnx2x]
Jun 23 15:46:09 1kvm1 kernel: [<ffffffffa02f33bd>] bnx2x_rx_int+0x89d/0x1910 [bnx2x]
Jun 23 15:46:09 1kvm1 kernel: [<ffffffff8101bad9>] ? sched_clock+0x9/0x10
Jun 23 15:46:09 1kvm1 kernel: [<ffffffff810acd84>] ? task_cputime+0x44/0x80
Jun 23 15:46:09 1kvm1 kernel: [<ffffffff810400c2>] ? x86_acpi_suspend_lowlevel+0x12/0x170
Jun 23 15:46:09 1kvm1 kernel: [<ffffffffa02f485a>] bnx2x_poll+0xfa/0x3c0 [bnx2x]
Jun 23 15:46:09 1kvm1 kernel: [<ffffffff814fcfa2>] net_rx_action+0x152/0x240
Jun 23 15:46:09 1kvm1 kernel: [<ffffffff81077bf7>] __do_softirq+0xf7/0x290
Jun 23 15:46:09 1kvm1 kernel: [<ffffffff8161671c>] call_softirq+0x1c/0x30
Jun 23 15:46:09 1kvm1 kernel: [<ffffffff81015de5>] do_softirq+0x55/0x90
Jun 23 15:46:09 1kvm1 kernel: [<ffffffff81077f95>] irq_exit+0x115/0x120
Jun 23 15:46:09 1kvm1 kernel: [<ffffffff816172b8>] do_IRQ+0x58/0xf0
Jun 23 15:46:09 1kvm1 kernel: [<ffffffff8160c4ad>] common_interrupt+0x6d/0x6d
Jun 23 15:46:09 1kvm1 kernel: <EOI> [<ffffffff814aaa32>] ? cpuidle_enter_state+0x52/0xc0
Jun 23 15:46:09 1kvm1 kernel: [<ffffffff814aaa28>] ? cpuidle_enter_state+0x48/0xc0
Jun 23 15:46:09 1kvm1 kernel: [<ffffffff814aab65>] cpuidle_idle_call+0xc5/0x200
Jun 23 15:46:09 1kvm1 kernel: [<ffffffff8101d21e>] arch_cpu_idle+0xe/0x30
Jun 23 15:46:09 1kvm1 kernel: [<ffffffff810c6985>] cpu_startup_entry+0xf5/0x290
Jun 23 15:46:09 1kvm1 kernel: [<ffffffff810423ca>] start_secondary+0x1ba/0x230
Jun 23 15:46:16 1kvm1 journal: metadata not found: Requested metadata element is not present
Jun 23 15:46:19 1kvm1 journal: metadata not found: Requested metadata element is not present
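Note: the failure above is an order:2 (four contiguous pages) allocation from the bnx2x receive path in softirq context, which points to memory fragmentation rather than true memory exhaustion. A possible workaround to try, as a sketch only (the values and the sysctl.d file name are illustrative assumptions, not tested recommendations):

    # reserve a larger pool of free pages so atomic multi-page
    # allocations can be served (256 MB on this 128 GB host)
    sysctl -w vm.min_free_kbytes=262144
    # make the kernel less eager to swap anonymous pages out
    sysctl -w vm.swappiness=10
    # persist across reboots (file name is arbitrary)
    printf 'vm.min_free_kbytes = 262144\nvm.swappiness = 10\n' > /etc/sysctl.d/99-allocfail.conf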
Regards,
Paf1