[Gluster-users] How to diagnose volume rebalance failure?

Susant Palai spalai at redhat.com
Thu Dec 17 12:23:42 UTC 2015


OK, from your reply the rebalance itself seems to be fine.
So what you can do is check whether the memory usage of the brick process keeps increasing constantly. If that is the case, take multiple statedumps at intervals.
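For example, something along these lines could log the brick's RSS and trigger a statedump every few minutes (just a rough sketch; the brick path pattern, the 5-minute interval and the 12 iterations are placeholders to adjust):

    # find the brick process PID (adjust the pattern to match your brick path)
    BRICK_PID=$(pgrep -f 'glusterfsd.*<your-brick-path>' | head -n 1)

    # every 5 minutes, log the RSS and trigger a statedump (SIGUSR1), 12 times
    for i in $(seq 1 12); do
        ps -o rss= -p "$BRICK_PID" >> /tmp/brick-rss.log
        kill -USR1 "$BRICK_PID"
        sleep 300
    done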

Regards,
Susant 

----- Original Message -----
From: "PuYun" <cloudor at 126.com>
To: "gluster-users" <gluster-users at gluster.org>
Cc: "gluster-users" <gluster-users at gluster.org>
Sent: Thursday, 17 December, 2015 3:57:12 PM
Subject: Re: [Gluster-users] How to diagnose volume rebalance failure?



Hi Susant, 


Thank you for your instructions. I'll do that. 


My volume contains more than 2 million leaf subdirectories, and most of them contain 10~30 small files. The current total size is about 900G. There are two bricks, each 1T. The current RAM size is 8G. 


Previously I saw 3 processes: one glusterfs process for rebalance and 2 glusterfsd processes for the bricks. Only 1 glusterfsd occupied a very large amount of memory, and it is the one serving the newly added brick. The other 2 processes seem normal. If that happens again, I will send you the statedumps. 
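Next time I will try to confirm which glusterfsd serves the newly added brick and how much memory it uses, roughly like this (<volname> is a placeholder for my volume name):

    # brick paths and the PID of the glusterfsd serving each one
    gluster volume status <volname>
    # memory usage (RSS/VSZ) of every glusterfsd process
    ps -o pid,rss,vsz,args -C glusterfsd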


Thank you. 




PuYun 





From: Susant Palai 
Date: 2015-12-17 14:50 
To: PuYun 
CC: gluster-users 
Subject: Re: [Gluster-users] How to diagnose volume rebalance failure? 

Hi PuYun, 
Would you be able to run rebalance again and take statedumps at intervals when you see high memory usage? Here are the details. 
##How to generate statedump 
We can find the directory where statedump files are created using the 'gluster --print-statedumpdir' command. 
Create that directory if it is not already present, based on the type of installation. 
Let's call this directory `statedump-directory`. 

We can generate a statedump using 'kill -USR1 <pid-of-gluster-process>'. 
Here, gluster-process means a glusterd/glusterfs/glusterfsd process. 
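Putting it together, a minimal example could look like this (/var/run/gluster is only a typical default; use whatever --print-statedumpdir reports, and substitute the real PID):

    # print the directory where statedump files are written
    gluster --print-statedumpdir
    # typical default is /var/run/gluster; create it if it does not already exist
    mkdir -p /var/run/gluster
    # list the gluster processes and their PIDs
    ps aux | grep '[g]luster'
    # trigger a statedump for the chosen process
    kill -USR1 <pid-of-gluster-process>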

I would like to know some more information. 

1) How big is your file system? [no. of files/dirs] 
2) What is the VM RAM size? 


Regards, 
Susant 

----- Original Message ----- 
From: "PuYun" <cloudor at 126.com> 
To: "gluster-users" <gluster-users at gluster.org> 
Sent: Wednesday, 16 December, 2015 8:30:57 PM 
Subject: Re: [Gluster-users] How to diagnose volume rebalance failure? 



Hi, 


I have upgraded all my server/client gluster packages to version 3.7.6 and started the rebalance task again. It ran much longer than before, but it got OOM-killed and failed again. 


===================== /var/log/messages ================== 
Dec 16 20:06:41 d001 kernel: glusterfsd invoked oom-killer: gfp_mask=0x201da, order=0, oom_adj=0, oom_score_adj=0 
Dec 16 20:06:41 d001 kernel: glusterfsd cpuset=/ mems_allowed=0 
Dec 16 20:06:41 d001 kernel: Pid: 4843, comm: glusterfsd Not tainted 2.6.32-431.23.3.el6.x86_64 #1 
Dec 16 20:06:41 d001 kernel: Call Trace: 
Dec 16 20:06:41 d001 kernel: [<ffffffff810d0431>] ? cpuset_print_task_mems_allowed+0x91/0xb0 
Dec 16 20:06:41 d001 kernel: [<ffffffff81122810>] ? dump_header+0x90/0x1b0 
Dec 16 20:06:41 d001 kernel: [<ffffffff8122833c>] ? security_real_capable_noaudit+0x3c/0x70 
Dec 16 20:06:41 d001 kernel: [<ffffffff81122c92>] ? oom_kill_process+0x82/0x2a0 
Dec 16 20:06:41 d001 kernel: [<ffffffff81122bd1>] ? select_bad_process+0xe1/0x120 
Dec 16 20:06:41 d001 kernel: [<ffffffff811230d0>] ? out_of_memory+0x220/0x3c0 
Dec 16 20:06:41 d001 kernel: [<ffffffff8112f9ef>] ? __alloc_pages_nodemask+0x89f/0x8d0 
Dec 16 20:06:41 d001 kernel: [<ffffffff811678ea>] ? alloc_pages_current+0xaa/0x110 
Dec 16 20:06:41 d001 kernel: [<ffffffff8111fc07>] ? __page_cache_alloc+0x87/0x90 
Dec 16 20:06:41 d001 kernel: [<ffffffff8111f5ee>] ? find_get_page+0x1e/0xa0 
Dec 16 20:06:41 d001 kernel: [<ffffffff81120ba7>] ? filemap_fault+0x1a7/0x500 
Dec 16 20:06:41 d001 kernel: [<ffffffff81149ed4>] ? __do_fault+0x54/0x530 
Dec 16 20:06:41 d001 kernel: [<ffffffff8114a4a7>] ? handle_pte_fault+0xf7/0xb00 
Dec 16 20:06:41 d001 kernel: [<ffffffff810aee5e>] ? futex_wake+0x10e/0x120 
Dec 16 20:06:41 d001 kernel: [<ffffffff8114b0da>] ? handle_mm_fault+0x22a/0x300 
Dec 16 20:06:41 d001 kernel: [<ffffffff8104a8d8>] ? __do_page_fault+0x138/0x480 
Dec 16 20:06:41 d001 kernel: [<ffffffff8103f9d8>] ? pvclock_clocksource_read+0x58/0xd0 
Dec 16 20:06:41 d001 kernel: [<ffffffff8152e74e>] ? do_page_fault+0x3e/0xa0 
Dec 16 20:06:41 d001 kernel: [<ffffffff8152bb05>] ? page_fault+0x25/0x30 
Dec 16 20:06:41 d001 kernel: Mem-Info: 

Dec 16 20:06:41 d001 kernel: Node 0 DMA per-cpu: 
Dec 16 20:06:41 d001 kernel: CPU 0: hi: 0, btch: 1 usd: 0 
Dec 16 20:06:41 d001 kernel: CPU 1: hi: 0, btch: 1 usd: 0 
Dec 16 20:06:41 d001 kernel: CPU 2: hi: 0, btch: 1 usd: 0 
Dec 16 20:06:41 d001 kernel: CPU 3: hi: 0, btch: 1 usd: 0 
Dec 16 20:06:41 d001 kernel: CPU 4: hi: 0, btch: 1 usd: 0 
Dec 16 20:06:41 d001 kernel: CPU 5: hi: 0, btch: 1 usd: 0 
Dec 16 20:06:41 d001 kernel: CPU 6: hi: 0, btch: 1 usd: 0 
Dec 16 20:06:41 d001 kernel: CPU 7: hi: 0, btch: 1 usd: 0 
Dec 16 20:06:41 d001 kernel: Node 0 DMA32 per-cpu: 
Dec 16 20:06:41 d001 kernel: CPU 0: hi: 186, btch: 31 usd: 14 
Dec 16 20:06:41 d001 kernel: CPU 1: hi: 186, btch: 31 usd: 152 
Dec 16 20:06:41 d001 kernel: CPU 2: hi: 186, btch: 31 usd: 108 
Dec 16 20:06:41 d001 kernel: CPU 3: hi: 186, btch: 31 usd: 70 
Dec 16 20:06:41 d001 kernel: CPU 4: hi: 186, btch: 31 usd: 152 
Dec 16 20:06:41 d001 kernel: CPU 5: hi: 186, btch: 31 usd: 0 
Dec 16 20:06:41 d001 kernel: CPU 6: hi: 186, btch: 31 usd: 0 
Dec 16 20:06:41 d001 kernel: CPU 7: hi: 186, btch: 31 usd: 0 
Dec 16 20:06:41 d001 kernel: Node 0 Normal per-cpu: 
Dec 16 20:06:41 d001 kernel: CPU 0: hi: 186, btch: 31 usd: 145 
Dec 16 20:06:41 d001 kernel: CPU 1: hi: 186, btch: 31 usd: 19 
Dec 16 20:06:41 d001 kernel: CPU 2: hi: 186, btch: 31 usd: 33 
Dec 16 20:06:41 d001 kernel: CPU 3: hi: 186, btch: 31 usd: 20 
Dec 16 20:06:41 d001 kernel: CPU 4: hi: 186, btch: 31 usd: 165 
Dec 16 20:06:41 d001 kernel: CPU 5: hi: 186, btch: 31 usd: 0 
Dec 16 20:06:41 d001 kernel: CPU 6: hi: 186, btch: 31 usd: 0 
Dec 16 20:06:41 d001 kernel: CPU 7: hi: 186, btch: 31 usd: 0 

Dec 16 20:06:41 d001 kernel: active_anon:1955964 inactive_anon:38 isolated_anon:0 
Dec 16 20:06:41 d001 kernel: active_file:312 inactive_file:1262 isolated_file:0 
Dec 16 20:06:41 d001 kernel: unevictable:0 dirty:1 writeback:3 unstable:0 
Dec 16 20:06:41 d001 kernel: free:25745 slab_reclaimable:2412 slab_unreclaimable:7815 
Dec 16 20:06:41 d001 kernel: mapped:208 shmem:43 pagetables:4679 bounce:0 
Dec 16 20:06:41 d001 kernel: Node 0 DMA free:15752kB min:124kB low:152kB high:184kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15364kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? yes 
Dec 16 20:06:41 d001 kernel: lowmem_reserve[]: 0 3000 8050 8050 
Dec 16 20:06:41 d001 kernel: Node 0 DMA32 free:45044kB min:25140kB low:31424kB high:37708kB active_anon:2740816kB inactive_anon:0kB active_file:896kB inactive_file:4176kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:3072096kB mlocked:0kB dirty:0kB writeback:4kB mapped:816kB shmem:0kB slab_reclaimable:1636kB slab_unreclaimable:1888kB kernel_stack:128kB pagetables:5204kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:1664 all_unreclaimable? yes 
Dec 16 20:06:41 d001 kernel: lowmem_reserve[]: 0 0 5050 5050 
Dec 16 20:06:41 d001 kernel: Node 0 Normal free:42184kB min:42316kB low:52892kB high:63472kB active_anon:5083040kB inactive_anon:152kB active_file:352kB inactive_file:872kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:5171200kB mlocked:0kB dirty:4kB writeback:8kB mapped:16kB shmem:172kB slab_reclaimable:8012kB slab_unreclaimable:29372kB kernel_stack:2240kB pagetables:13512kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:1131 all_unreclaimable? yes 
Dec 16 20:06:41 d001 kernel: lowmem_reserve[]: 0 0 0 0 
Dec 16 20:06:41 d001 kernel: Node 0 DMA: 2*4kB 2*8kB 1*16kB 1*32kB 1*64kB 0*128kB 1*256kB 0*512kB 1*1024kB 1*2048kB 3*4096kB = 15752kB 
Dec 16 20:06:41 d001 kernel: Node 0 DMA32: 11044*4kB 0*8kB 0*16kB 0*32kB 0*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 44176kB 
Dec 16 20:06:41 d001 kernel: Node 0 Normal: 10515*4kB 0*8kB 0*16kB 0*32kB 0*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 42060kB 
Dec 16 20:06:41 d001 kernel: 1782 total pagecache pages 

Dec 16 20:06:41 d001 kernel: 0 pages in swap cache 
Dec 16 20:06:41 d001 kernel: Swap cache stats: add 0, delete 0, find 0/0 
Dec 16 20:06:41 d001 kernel: Free swap = 0kB 
Dec 16 20:06:41 d001 kernel: Total swap = 0kB 
Dec 16 20:06:41 d001 kernel: 2097151 pages RAM 
Dec 16 20:06:41 d001 kernel: 81926 pages reserved 
Dec 16 20:06:41 d001 kernel: 924 pages shared 
Dec 16 20:06:41 d001 kernel: 1984896 pages non-shared 
Dec 16 20:06:41 d001 kernel: [ pid ] uid tgid total_vm rss cpu oom_adj oom_score_adj name 
Dec 16 20:06:41 d001 kernel: [ 477] 0 477 2662 105 0 -17 -1000 udevd 
Dec 16 20:06:41 d001 kernel: [ 916] 0 916 374 48 0 0 0 aliyun-service 
Dec 16 20:06:41 d001 kernel: [ 1156] 0 1156 62798 192 0 0 0 rsyslogd 
Dec 16 20:06:41 d001 kernel: [ 1178] 32 1178 4744 62 0 0 0 rpcbind 
Dec 16 20:06:41 d001 kernel: [ 1198] 29 1198 5837 112 1 0 0 rpc.statd 
Dec 16 20:06:41 d001 kernel: [ 1382] 28 1382 157544 113 1 0 0 nscd 
Dec 16 20:06:41 d001 kernel: [ 1414] 0 1414 118751 699 0 0 0 AliYunDunUpdate 
Dec 16 20:06:41 d001 kernel: [ 1448] 0 1448 16657 178 0 -17 -1000 sshd 
Dec 16 20:06:41 d001 kernel: [ 1463] 38 1463 6683 152 0 0 0 ntpd 
Dec 16 20:06:41 d001 kernel: [ 1473] 0 1473 29325 154 0 0 0 crond 
Dec 16 20:06:41 d001 kernel: [ 1516] 0 1516 1016 19 1 0 0 mingetty 
Dec 16 20:06:41 d001 kernel: [ 1518] 0 1518 1016 17 3 0 0 mingetty 
Dec 16 20:06:41 d001 kernel: [ 1520] 0 1520 1016 18 5 0 0 mingetty 
Dec 16 20:06:41 d001 kernel: [ 1522] 0 1522 2661 104 1 -17 -1000 udevd 
Dec 16 20:06:41 d001 kernel: [ 1523] 0 1523 2661 104 4 -17 -1000 udevd 
Dec 16 20:06:41 d001 kernel: [ 1524] 0 1524 1016 18 2 0 0 mingetty 
Dec 16 20:06:41 d001 kernel: [ 1526] 0 1526 1016 19 4 0 0 mingetty 

Dec 16 20:06:41 d001 kernel: [ 1528] 0 1528 1016 19 1 0 0 mingetty 
Dec 16 20:06:41 d001 kernel: [ 1652] 0 1652 191799 1191 0 0 0 AliYunDun 
Dec 16 20:06:41 d001 kernel: [ 1670] 0 1670 249011 1149 0 0 0 AliHids 
Dec 16 20:06:41 d001 kernel: [ 4546] 0 4546 185509 4817 1 0 0 glusterd 
Dec 16 20:06:41 d001 kernel: [ 4697] 0 4697 429110 35780 1 0 0 glusterfsd 
Dec 16 20:06:41 d001 kernel: [ 4715] 0 4715 2149944 1788310 0 0 0 glusterfsd 
Dec 16 20:06:41 d001 kernel: [ 4830] 0 4830 137846 6463 0 0 0 glusterfs 
Dec 16 20:06:41 d001 kernel: [ 4940] 0 4940 341517 116710 1 0 0 glusterfs 
Dec 16 20:06:41 d001 kernel: Out of memory: Kill process 4715 (glusterfsd) score 859 or sacrifice child 
Dec 16 20:06:41 d001 kernel: Killed process 4715, UID 0, (glusterfsd) total-vm:8599776kB, anon-rss:7152896kB, file-rss:344kB 
Dec 16 20:06:41 d001 kernel: glusterfsd invoked oom-killer: gfp_mask=0xd0, order=0, oom_adj=0, oom_score_adj=0 
Dec 16 20:06:41 d001 kernel: glusterfsd cpuset=/ mems_allowed=0 
Dec 16 20:06:41 d001 kernel: Pid: 4717, comm: glusterfsd Not tainted 2.6.32-431.23.3.el6.x86_64 #1 
Dec 16 20:06:41 d001 kernel: Call Trace: 
Dec 16 20:06:41 d001 kernel: [<ffffffff810d0431>] ? cpuset_print_task_mems_allowed+0x91/0xb0 
Dec 16 20:06:41 d001 kernel: [<ffffffff81122810>] ? dump_header+0x90/0x1b0 
Dec 16 20:06:41 d001 kernel: [<ffffffff8122833c>] ? security_real_capable_noaudit+0x3c/0x70 
Dec 16 20:06:41 d001 kernel: [<ffffffff81122c92>] ? oom_kill_process+0x82/0x2a0 
Dec 16 20:06:41 d001 kernel: [<ffffffff81122bd1>] ? select_bad_process+0xe1/0x120 
Dec 16 20:06:41 d001 kernel: [<ffffffff811230d0>] ? out_of_memory+0x220/0x3c0 
Dec 16 20:06:41 d001 kernel: [<ffffffff8112f9ef>] ? __alloc_pages_nodemask+0x89f/0x8d0 
Dec 16 20:06:41 d001 kernel: [<ffffffff8116e2d2>] ? kmem_getpages+0x62/0x170 
Dec 16 20:06:41 d001 kernel: [<ffffffff8116eeea>] ? fallback_alloc+0x1ba/0x270 
Dec 16 20:06:41 d001 kernel: [<ffffffff8116e93f>] ? cache_grow+0x2cf/0x320 
Dec 16 20:06:41 d001 kernel: [<ffffffff8116ec69>] ? ____cache_alloc_node+0x99/0x160 
Dec 16 20:06:41 d001 kernel: [<ffffffff8116fbeb>] ? kmem_cache_alloc+0x11b/0x190 
Dec 16 20:06:41 d001 kernel: [<ffffffff810efb75>] ? taskstats_exit+0x305/0x390 

Dec 16 20:06:41 d001 kernel: [<ffffffff81076c27>] ? do_exit+0x157/0x870 
Dec 16 20:06:41 d001 kernel: [<ffffffff81060aa3>] ? perf_event_task_sched_out+0x33/0x70 
Dec 16 20:06:41 d001 kernel: [<ffffffff81077398>] ? do_group_exit+0x58/0xd0 
Dec 16 20:06:41 d001 kernel: [<ffffffff8108cd46>] ? get_signal_to_deliver+0x1f6/0x460 
Dec 16 20:06:41 d001 kernel: [<ffffffff8100a265>] ? do_signal+0x75/0x800 
Dec 16 20:06:41 d001 kernel: [<ffffffff8108c85a>] ? dequeue_signal+0xda/0x170 
Dec 16 20:06:41 d001 kernel: [<ffffffff8108cb40>] ? sys_rt_sigtimedwait+0x250/0x260 
Dec 16 20:06:41 d001 kernel: [<ffffffff81077087>] ? do_exit+0x5b7/0x870 
Dec 16 20:06:41 d001 kernel: [<ffffffff8100aa80>] ? do_notify_resume+0x90/0xc0 
Dec 16 20:06:41 d001 kernel: [<ffffffff8100b341>] ? int_signal+0x12/0x17 
Dec 16 20:06:41 d001 kernel: Mem-Info: 
Dec 16 20:06:41 d001 kernel: Node 0 DMA per-cpu: 
Dec 16 20:06:41 d001 kernel: CPU 0: hi: 0, btch: 1 usd: 0 
Dec 16 20:06:41 d001 kernel: CPU 1: hi: 0, btch: 1 usd: 0 
Dec 16 20:06:41 d001 kernel: CPU 2: hi: 0, btch: 1 usd: 0 
Dec 16 20:06:41 d001 kernel: CPU 3: hi: 0, btch: 1 usd: 0 
Dec 16 20:06:41 d001 kernel: CPU 4: hi: 0, btch: 1 usd: 0 
Dec 16 20:06:41 d001 kernel: CPU 5: hi: 0, btch: 1 usd: 0 
Dec 16 20:06:41 d001 kernel: CPU 6: hi: 0, btch: 1 usd: 0 
Dec 16 20:06:41 d001 kernel: CPU 7: hi: 0, btch: 1 usd: 0 
Dec 16 20:06:41 d001 kernel: Node 0 DMA32 per-cpu: 
Dec 16 20:06:41 d001 kernel: CPU 0: hi: 186, btch: 31 usd: 14 
Dec 16 20:06:41 d001 kernel: CPU 1: hi: 186, btch: 31 usd: 152 
Dec 16 20:06:41 d001 kernel: CPU 2: hi: 186, btch: 31 usd: 108 
Dec 16 20:06:41 d001 kernel: CPU 3: hi: 186, btch: 31 usd: 70 
Dec 16 20:06:41 d001 kernel: CPU 4: hi: 186, btch: 31 usd: 152 
Dec 16 20:06:41 d001 kernel: CPU 5: hi: 186, btch: 31 usd: 0 

Dec 16 20:06:41 d001 kernel: CPU 6: hi: 186, btch: 31 usd: 0 
Dec 16 20:06:41 d001 kernel: CPU 7: hi: 186, btch: 31 usd: 0 
Dec 16 20:06:41 d001 kernel: Node 0 Normal per-cpu: 
Dec 16 20:06:41 d001 kernel: CPU 0: hi: 186, btch: 31 usd: 145 
Dec 16 20:06:41 d001 kernel: CPU 1: hi: 186, btch: 31 usd: 19 
Dec 16 20:06:41 d001 kernel: CPU 2: hi: 186, btch: 31 usd: 33 
Dec 16 20:06:41 d001 kernel: CPU 3: hi: 186, btch: 31 usd: 50 
Dec 16 20:06:41 d001 kernel: CPU 4: hi: 186, btch: 31 usd: 165 
Dec 16 20:06:41 d001 kernel: CPU 5: hi: 186, btch: 31 usd: 0 
Dec 16 20:06:41 d001 kernel: CPU 6: hi: 186, btch: 31 usd: 0 
Dec 16 20:06:41 d001 kernel: CPU 7: hi: 186, btch: 31 usd: 0 
Dec 16 20:06:41 d001 kernel: active_anon:1955964 inactive_anon:38 isolated_anon:0 
Dec 16 20:06:41 d001 kernel: active_file:312 inactive_file:1262 isolated_file:0 
Dec 16 20:06:41 d001 kernel: unevictable:0 dirty:1 writeback:3 unstable:0 
Dec 16 20:06:41 d001 kernel: free:25745 slab_reclaimable:2412 slab_unreclaimable:7815 
Dec 16 20:06:41 d001 kernel: mapped:208 shmem:43 pagetables:4679 bounce:0 
Dec 16 20:06:41 d001 kernel: Node 0 DMA free:15752kB min:124kB low:152kB high:184kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15364kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? yes 
Dec 16 20:06:41 d001 kernel: lowmem_reserve[]: 0 3000 8050 8050 
Dec 16 20:06:41 d001 kernel: Node 0 DMA32 free:45044kB min:25140kB low:31424kB high:37708kB active_anon:2740816kB inactive_anon:0kB active_file:896kB inactive_file:4176kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:3072096kB mlocked:0kB dirty:0kB writeback:4kB mapped:816kB shmem:0kB slab_reclaimable:1636kB slab_unreclaimable:1888kB kernel_stack:128kB pagetables:5204kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:1664 all_unreclaimable? yes 
Dec 16 20:06:41 d001 kernel: lowmem_reserve[]: 0 0 5050 5050 

Dec 16 20:06:41 d001 kernel: Node 0 Normal free:42184kB min:42316kB low:52892kB high:63472kB active_anon:5083040kB inactive_anon:152kB active_file:352kB inactive_file:872kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:5171200kB mlocked:0kB dirty:4kB writeback:8kB mapped:16kB shmem:172kB slab_reclaimable:8012kB slab_unreclaimable:29372kB kernel_stack:2240kB pagetables:13512kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:1131 all_unreclaimable? yes 
Dec 16 20:06:41 d001 kernel: lowmem_reserve[]: 0 0 0 0 
Dec 16 20:06:41 d001 kernel: Node 0 DMA: 2*4kB 2*8kB 1*16kB 1*32kB 1*64kB 0*128kB 1*256kB 0*512kB 1*1024kB 1*2048kB 3*4096kB = 15752kB 
Dec 16 20:06:41 d001 kernel: Node 0 DMA32: 11044*4kB 0*8kB 0*16kB 0*32kB 0*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 44176kB 
Dec 16 20:06:41 d001 kernel: Node 0 Normal: 10484*4kB 0*8kB 0*16kB 0*32kB 0*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 41936kB 
Dec 16 20:06:41 d001 kernel: 1782 total pagecache pages 
Dec 16 20:06:41 d001 kernel: 0 pages in swap cache 
Dec 16 20:06:41 d001 kernel: Swap cache stats: add 0, delete 0, find 0/0 
Dec 16 20:06:41 d001 kernel: Free swap = 0kB 
Dec 16 20:06:41 d001 kernel: Total swap = 0kB 
Dec 16 20:06:41 d001 kernel: 2097151 pages RAM 
Dec 16 20:06:41 d001 kernel: 81926 pages reserved 
Dec 16 20:06:41 d001 kernel: 931 pages shared 
Dec 16 20:06:41 d001 kernel: 1984884 pages non-shared 
Dec 16 20:06:41 d001 kernel: [ pid ] uid tgid total_vm rss cpu oom_adj oom_score_adj name 
Dec 16 20:06:41 d001 kernel: [ 477] 0 477 2662 105 0 -17 -1000 udevd 
Dec 16 20:06:41 d001 kernel: [ 916] 0 916 374 48 0 0 0 aliyun-service 
Dec 16 20:06:41 d001 kernel: [ 1156] 0 1156 62798 192 0 0 0 rsyslogd 

Dec 16 20:06:41 d001 kernel: [ 1178] 32 1178 4744 62 0 0 0 rpcbind 
Dec 16 20:06:41 d001 kernel: [ 1198] 29 1198 5837 112 1 0 0 rpc.statd 
Dec 16 20:06:41 d001 kernel: [ 1382] 28 1382 157544 113 1 0 0 nscd 
Dec 16 20:06:41 d001 kernel: [ 1414] 0 1414 118751 699 0 0 0 AliYunDunUpdate 
Dec 16 20:06:41 d001 kernel: [ 1448] 0 1448 16657 178 0 -17 -1000 sshd 
Dec 16 20:06:41 d001 kernel: [ 1463] 38 1463 6683 152 0 0 0 ntpd 
Dec 16 20:06:41 d001 kernel: [ 1473] 0 1473 29325 154 0 0 0 crond 
Dec 16 20:06:41 d001 kernel: [ 1516] 0 1516 1016 19 1 0 0 mingetty 
Dec 16 20:06:41 d001 kernel: [ 1518] 0 1518 1016 17 3 0 0 mingetty 
Dec 16 20:06:41 d001 kernel: [ 1520] 0 1520 1016 18 5 0 0 mingetty 
Dec 16 20:06:41 d001 kernel: [ 1522] 0 1522 2661 104 1 -17 -1000 udevd 
Dec 16 20:06:41 d001 kernel: [ 1523] 0 1523 2661 104 4 -17 -1000 udevd 
Dec 16 20:06:41 d001 kernel: [ 1524] 0 1524 1016 18 2 0 0 mingetty 
Dec 16 20:06:41 d001 kernel: [ 1526] 0 1526 1016 19 4 0 0 mingetty 
Dec 16 20:06:41 d001 kernel: [ 1528] 0 1528 1016 19 1 0 0 mingetty 
Dec 16 20:06:41 d001 kernel: [ 1652] 0 1652 191799 1203 0 0 0 AliYunDun 
Dec 16 20:06:41 d001 kernel: [ 1670] 0 1670 249011 1160 0 0 0 AliHids 
Dec 16 20:06:41 d001 kernel: [ 4546] 0 4546 185509 4817 1 0 0 glusterd 
Dec 16 20:06:41 d001 kernel: [ 4697] 0 4697 429110 35780 1 0 0 glusterfsd 
Dec 16 20:06:41 d001 kernel: [ 4717] 0 4715 2149944 1788310 4 0 0 glusterfsd 
Dec 16 20:06:41 d001 kernel: [ 4830] 0 4830 137846 6463 0 0 0 glusterfs 
Dec 16 20:06:41 d001 kernel: [ 4940] 0 4940 341517 116710 1 0 0 glusterfs 
===================== <EOF> ================================== 



PuYun 





From: PuYun 
Date: 2015-12-15 22:10 
To: gluster-users 
Subject: Re: [Gluster-users] How to diagnose volume rebalance failure? 


Hi, 


I found this bug report: https://bugzilla.redhat.com/show_bug.cgi?id=1261234 . My version is 3.7.4, which is older than the fixed version, 3.7.5. 
I'll upgrade my gluster version and try again later. 
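For reference, after the upgrade I plan to simply restart and watch the rebalance with the standard commands (<volname> is a placeholder for my volume name):

    # start the rebalance again and monitor its progress
    gluster volume rebalance <volname> start
    gluster volume rebalance <volname> status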


Thank you. 



PuYun 












_______________________________________________ 
Gluster-users mailing list 
Gluster-users at gluster.org 
http://www.gluster.org/mailman/listinfo/gluster-users 