[Gluster-devel] 3.6.8 glusterfsd processes not responding

Raghavendra G raghavendra at gluster.com
Tue Feb 16 05:21:30 UTC 2016


Had missed adding gluster-devel.

On Tue, Feb 16, 2016 at 10:50 AM, Raghavendra G <raghavendra at gluster.com>
wrote:

> The thread is blocked on ctx->lock. Looking at the definition of
> pl_ctx_get, I found that the two operations
>
> 1. Check whether a ctx is already present.
> 2. Create and set the ctx.
>
> are not atomic. This can result in a non-NULL ctx (say ctx1) being
> overwritten by a racing thread (with say ctx2). If somebody has already
> acquired a lock under ctx1, they would end up doing the unlock on ctx2,
> resulting in corrupted lock state. This can result in hangs. Below is the
> definition of pl_ctx_get:
>
> pl_ctx_t *
> pl_ctx_get (client_t *client, xlator_t *xlator)
> {
>         void     *tmp = NULL;
>         pl_ctx_t *ctx = NULL;
>
>         client_ctx_get (client, xlator, &tmp);
>
>         ctx = tmp;
>
>         if (ctx != NULL)
>                 goto out;
>
>         ctx = GF_CALLOC (1, sizeof (pl_ctx_t), gf_locks_mt_posix_lock_t);
>
>         if (ctx == NULL)
>                 goto out;
>
>         pthread_mutex_init (&ctx->lock, NULL);
>         INIT_LIST_HEAD (&ctx->inodelk_lockers);
>         INIT_LIST_HEAD (&ctx->entrylk_lockers);
>
>         if (client_ctx_set (client, xlator, ctx) != 0) {
>                 pthread_mutex_destroy (&ctx->lock);
>                 GF_FREE (ctx);
>                 ctx = NULL;
>         }
> out:
>         return ctx;
> }
>
> Though this is a bug, I am not sure whether it is the root cause of the
> issue reported in this thread. I'll send out a patch for the problem
> identified here.
>
>
> On Sat, Feb 13, 2016 at 8:46 AM, Joe Julian <joe at julianfamily.org> wrote:
>
>> I've also got several glusterfsd processes that have stopped responding.
>>
>> A backtrace from a live core, strace, and state dump follow:
>>
>> Thread 10 (LWP 31587):
>> #0  0x00007f81d384289c in __lll_lock_wait () from
>> /lib/x86_64-linux-gnu/libpthread.so.0
>> #1  0x00007f81d383e065 in _L_lock_858 () from
>> /lib/x86_64-linux-gnu/libpthread.so.0
>> #2  0x00007f81d383deba in pthread_mutex_lock () from
>> /lib/x86_64-linux-gnu/libpthread.so.0
>> #3  0x00007f81ce600ff8 in pl_inodelk_client_cleanup ()
>>    from /usr/lib/x86_64-linux-gnu/glusterfs/3.6.8/xlator/features/locks.so
>> #4  0x00007f81ce5fe84a in ?? () from
>> /usr/lib/x86_64-linux-gnu/glusterfs/3.6.8/xlator/features/locks.so
>> #5  0x00007f81d3ef573d in gf_client_disconnect () from
>> /usr/lib/x86_64-linux-gnu/libglusterfs.so.0
>> #6  0x00007f81cd74e270 in server_connection_cleanup ()
>>    from
>> /usr/lib/x86_64-linux-gnu/glusterfs/3.6.8/xlator/protocol/server.so
>> #7  0x00007f81cd7486ec in server_rpc_notify ()
>>    from
>> /usr/lib/x86_64-linux-gnu/glusterfs/3.6.8/xlator/protocol/server.so
>> #8  0x00007f81d3c70f1b in rpcsvc_handle_disconnect () from
>> /usr/lib/x86_64-linux-gnu/libgfrpc.so.0
>> #9  0x00007f81d3c710b0 in rpcsvc_notify () from
>> /usr/lib/x86_64-linux-gnu/libgfrpc.so.0
>> #10 0x00007f81d3c74257 in rpc_transport_notify () from
>> /usr/lib/x86_64-linux-gnu/libgfrpc.so.0
>> #11 0x00007f81cf4d4077 in ?? () from
>> /usr/lib/x86_64-linux-gnu/glusterfs/3.6.8/rpc-transport/socket.so
>> #12 0x00007f81d3ef793b in ?? () from
>> /usr/lib/x86_64-linux-gnu/libglusterfs.so.0
>> #13 0x00007f81d4348f71 in main ()
>>
>> Thread 9 (LWP 3385):
>> #0  0x00007f81d353408d in nanosleep () from
>> /lib/x86_64-linux-gnu/libc.so.6
>> #1  0x00007f81d3533f2c in sleep () from /lib/x86_64-linux-gnu/libc.so.6
>> #2  0x00007f81cee50c38 in ?? () from
>> /usr/lib/x86_64-linux-gnu/glusterfs/3.6.8/xlator/storage/posix.so
>> #3  0x00007f81d383be9a in start_thread () from
>> /lib/x86_64-linux-gnu/libpthread.so.0
>> #4  0x00007f81d35683fd in clone () from /lib/x86_64-linux-gnu/libc.so.6
>> #5  0x0000000000000000 in ?? ()
>>
>> Thread 8 (LWP 20656):
>> #0  0x00007f81d38400fe in pthread_cond_timedwait@@GLIBC_2.3.2 ()
>>    from /lib/x86_64-linux-gnu/libpthread.so.0
>> #1  0x00007f81ce3ea032 in iot_worker ()
>>    from
>> /usr/lib/x86_64-linux-gnu/glusterfs/3.6.8/xlator/performance/io-threads.so
>> #2  0x00007f81d383be9a in start_thread () from
>> /lib/x86_64-linux-gnu/libpthread.so.0
>> #3  0x00007f81d35683fd in clone () from /lib/x86_64-linux-gnu/libc.so.6
>> #4  0x0000000000000000 in ?? ()
>>
>> Thread 7 (LWP 31881):
>> #0  0x00007f81d383fd84 in pthread_cond_wait@@GLIBC_2.3.2 () from
>> /lib/x86_64-linux-gnu/libpthread.so.0
>> #1  0x00007f81cee50f3b in posix_fsyncer_pick ()
>>    from /usr/lib/x86_64-linux-gnu/glusterfs/3.6.8/xlator/storage/posix.so
>> #2  0x00007f81cee51155 in posix_fsyncer ()
>>    from /usr/lib/x86_64-linux-gnu/glusterfs/3.6.8/xlator/storage/posix.so
>> #3  0x00007f81d383be9a in start_thread () from
>> /lib/x86_64-linux-gnu/libpthread.so.0
>> #4  0x00007f81d35683fd in clone () from /lib/x86_64-linux-gnu/libc.so.6
>> #5  0x0000000000000000 in ?? ()
>>
>> Thread 6 (LWP 31880):
>> #0  0x00007f81d38400fe in pthread_cond_timedwait@@GLIBC_2.3.2 ()
>>    from /lib/x86_64-linux-gnu/libpthread.so.0
>> #1  0x00007f81cee4de5a in ?? () from
>> /usr/lib/x86_64-linux-gnu/glusterfs/3.6.8/xlator/storage/posix.so
>> #2  0x00007f81d383be9a in start_thread () from
>> /lib/x86_64-linux-gnu/libpthread.so.0
>> #3  0x00007f81d35683fd in clone () from /lib/x86_64-linux-gnu/libc.so.6
>> #4  0x0000000000000000 in ?? ()
>>
>> Thread 5 (LWP 31842):
>> #0  0x00007f81d383fd84 in pthread_cond_wait@@GLIBC_2.3.2 () from
>> /lib/x86_64-linux-gnu/libpthread.so.0
>> #1  0x00007f81cdfd77fb in index_worker ()
>>    from /usr/lib/x86_64-linux-gnu/glusterfs/3.6.8/xlator/features/index.so
>> #2  0x00007f81d383be9a in start_thread () from
>> /lib/x86_64-linux-gnu/libpthread.so.0
>> #3  0x00007f81d35683fd in clone () from /lib/x86_64-linux-gnu/libc.so.6
>> #4  0x0000000000000000 in ?? ()
>>
>> Thread 4 (LWP 31591):
>> #0  0x00007f81d38400fe in pthread_cond_timedwait@@GLIBC_2.3.2 ()
>>    from /lib/x86_64-linux-gnu/libpthread.so.0
>> #1  0x00007f81d3eddfe3 in syncenv_task () from
>> /usr/lib/x86_64-linux-gnu/libglusterfs.so.0
>> #2  0x00007f81d3ede440 in syncenv_processor () from
>> /usr/lib/x86_64-linux-gnu/libglusterfs.so.0
>> #3  0x00007f81d383be9a in start_thread () from
>> /lib/x86_64-linux-gnu/libpthread.so.0
>> #4  0x00007f81d35683fd in clone () from /lib/x86_64-linux-gnu/libc.so.6
>> #5  0x0000000000000000 in ?? ()
>>
>> Thread 3 (LWP 31590):
>> #0  0x00007f81d38400fe in pthread_cond_timedwait@@GLIBC_2.3.2 ()
>>    from /lib/x86_64-linux-gnu/libpthread.so.0
>> #1  0x00007f81d3eddfe3 in syncenv_task () from
>> /usr/lib/x86_64-linux-gnu/libglusterfs.so.0
>> #2  0x00007f81d3ede440 in syncenv_processor () from
>> /usr/lib/x86_64-linux-gnu/libglusterfs.so.0
>> #3  0x00007f81d383be9a in start_thread () from
>> /lib/x86_64-linux-gnu/libpthread.so.0
>> #4  0x00007f81d35683fd in clone () from /lib/x86_64-linux-gnu/libc.so.6
>> #5  0x0000000000000000 in ?? ()
>>
>> Thread 2 (LWP 31589):
>> #0  0x00007f81d38439f7 in do_sigwait () from
>> /lib/x86_64-linux-gnu/libpthread.so.0
>> #1  0x00007f81d3843a79 in sigwait () from
>> /lib/x86_64-linux-gnu/libpthread.so.0
>> #2  0x00007f81d434be12 in glusterfs_sigwaiter ()
>> #3  0x00007f81d383be9a in start_thread () from
>> /lib/x86_64-linux-gnu/libpthread.so.0
>> #4  0x00007f81d35683fd in clone () from /lib/x86_64-linux-gnu/libc.so.6
>> #5  0x0000000000000000 in ?? ()
>>
>> Thread 1 (LWP 31588):
>> #0  0x00007f81d384352d in nanosleep () from
>> /lib/x86_64-linux-gnu/libpthread.so.0
>> #1  0x00007f81d3ebc4dc in gf_timer_proc () from
>> /usr/lib/x86_64-linux-gnu/libglusterfs.so.0
>> #2  0x00007f81d383be9a in start_thread () from
>> /lib/x86_64-linux-gnu/libpthread.so.0
>> #3  0x00007f81d35683fd in clone () from /lib/x86_64-linux-gnu/libc.so.6
>> #4  0x0000000000000000 in ?? ()
>>
>>
>>
>> Process 31587 attached with 10 threads - interrupt to quit
>> [pid  3385] 03:09:19 restart_syscall(<... resuming interrupted call ...>
>> <unfinished ...>
>> [pid 20656] 03:09:19 restart_syscall(<... resuming interrupted call ...>
>> <unfinished ...>
>> [pid 31881] 03:09:19 futex(0x7f81d5f538bc, FUTEX_WAIT_PRIVATE, 23707,
>> NULL <unfinished ...>
>> [pid 31880] 03:09:19 restart_syscall(<... resuming interrupted call ...>
>> <unfinished ...>
>> [pid 31842] 03:09:19 futex(0x7f81d5f4889c, FUTEX_WAIT_PRIVATE, 41409,
>> NULL <unfinished ...>
>> [pid 31591] 03:09:19 restart_syscall(<... resuming interrupted call ...>
>> <unfinished ...>
>> [pid 31590] 03:09:19 restart_syscall(<... resuming interrupted call ...>
>> <unfinished ...>
>> [pid 31589] 03:09:19 rt_sigtimedwait([HUP INT USR1 USR2 TERM], NULL,
>> NULL, 8 <unfinished ...>
>> [pid 31588] 03:09:19 restart_syscall(<... resuming interrupted call ...>
>> <unfinished ...>
>> [pid 31587] 03:09:19 futex(0x7f81900023b0, FUTEX_WAIT_PRIVATE, 2, NULL
>> <unfinished ...>
>> [pid 31588] 03:09:19 <... restart_syscall resumed> ) = 0
>> [pid 31588] 03:09:19 nanosleep({1, 0}, NULL) = 0
>> [pid 31588] 03:09:20 nanosleep({1, 0}, NULL) = 0
>> [pid 31588] 03:09:21 nanosleep({1, 0}, NULL) = 0
>> [pid 31588] 03:09:22 nanosleep({1, 0}, NULL) = 0
>> [pid 31588] 03:09:23 nanosleep({1, 0}, NULL) = 0
>> [pid 31588] 03:09:24 nanosleep({1, 0}, NULL) = 0
>> [pid 31588] 03:09:25 nanosleep({1, 0}, NULL) = 0
>> [pid 31588] 03:09:26 nanosleep({1, 0}, NULL) = 0
>> [pid 31588] 03:09:27 nanosleep({1, 0}, NULL) = 0
>> [pid 31588] 03:09:28 nanosleep({1, 0}, NULL) = 0
>> [pid 31588] 03:09:29 nanosleep({1, 0}, NULL) = 0
>> [pid 31588] 03:09:30 nanosleep({1, 0}, NULL) = 0
>> [pid 31588] 03:09:31 nanosleep({1, 0}, NULL) = 0
>> [pid 31588] 03:09:32 nanosleep({1, 0}, NULL) = 0
>> [pid 31588] 03:09:33 nanosleep({1, 0}, NULL) = 0
>> [pid 31588] 03:09:34 nanosleep({1, 0}, NULL) = 0
>> [pid 31588] 03:09:35 nanosleep({1, 0}, NULL) = 0
>> [pid 31588] 03:09:36 nanosleep({1, 0}, NULL) = 0
>> [pid 31588] 03:09:37 nanosleep({1, 0}, NULL) = 0
>> [pid 31588] 03:09:38 nanosleep({1, 0}, NULL) = 0
>> [pid 31588] 03:09:39 nanosleep({1, 0}, NULL) = 0
>> [pid 31588] 03:09:40 nanosleep({1, 0}, NULL) = 0
>> [pid 31588] 03:09:41 nanosleep({1, 0}, NULL) = 0
>> [pid 31588] 03:09:42 nanosleep({1, 0}, NULL) = 0
>> [pid 31588] 03:09:43 nanosleep({1, 0}, NULL) = 0
>> [pid 31588] 03:09:44 nanosleep({1, 0}, NULL) = 0
>> [pid 31588] 03:09:45 nanosleep({1, 0},  <unfinished ...>
>> [pid  3385] 03:09:46 <... restart_syscall resumed> ) = 0
>> [pid  3385] 03:09:46
>> open("/gluster/brick03/cinder-std-01/.glusterfs/health_check",
>> O_RDWR|O_CREAT, 0644) = 27
>> [pid  3385] 03:09:46 write(27, "2016-02-13 03:09:46", 19) = 19
>> [pid  3385] 03:09:46 lseek(27, 0, SEEK_SET) = 0
>> [pid  3385] 03:09:46 read(27, "2016-02-13 03:09:46", 19) = 19
>> [pid  3385] 03:09:46 close(27)          = 0
>> [pid  3385] 03:09:46 rt_sigprocmask(SIG_BLOCK, [CHLD], ~[ILL ABRT BUS FPE
>> KILL SEGV STOP SYS RTMIN RT_1], 8) = 0
>> [pid  3385] 03:09:46 nanosleep({30, 0},  <unfinished ...>
>> [pid 31588] 03:09:46 <... nanosleep resumed> NULL) = 0
>> [pid 31588] 03:09:46 nanosleep({1, 0}, NULL) = 0
>> [pid 31588] 03:09:47 nanosleep({1, 0}, NULL) = 0
>> [pid 31588] 03:09:48 nanosleep({1, 0}, NULL) = 0
>> [pid 31588] 03:09:49 nanosleep({1, 0}, NULL) = 0
>> [pid 31588] 03:09:50 nanosleep({1, 0}, NULL) = 0
>> [pid 31588] 03:09:51 nanosleep({1, 0}, NULL) = 0
>> [pid 31588] 03:09:52 nanosleep({1, 0}, NULL) = 0
>> [pid 31588] 03:09:53 nanosleep({1, 0}, NULL) = 0
>> [pid 31588] 03:09:54 nanosleep({1, 0}, NULL) = 0
>> [pid 31588] 03:09:55 nanosleep({1, 0}, NULL) = 0
>> [pid 31588] 03:09:56 nanosleep({1, 0}, NULL) = 0
>> [pid 31588] 03:09:57 nanosleep({1, 0}, NULL) = 0
>> [pid 31588] 03:09:58 nanosleep({1, 0}, NULL) = 0
>> [pid 31588] 03:09:59 nanosleep({1, 0}, NULL) = 0
>> [pid 31588] 03:10:00 nanosleep({1, 0}, NULL) = 0
>> [pid 31588] 03:10:01 nanosleep({1, 0}, NULL) = 0
>> [pid 31588] 03:10:02 nanosleep({1, 0}, NULL) = 0
>>
>>
>>
>>
>>
>>
>> DUMP-START-TIME: 2016-02-13 02:42:44.670135
>>
>> [mallinfo]
>> mallinfo_arena=1187840
>> mallinfo_ordblks=260
>> mallinfo_smblks=1
>> mallinfo_hblks=12
>> mallinfo_hblkhd=16060416
>> mallinfo_usmblks=0
>> mallinfo_fsmblks=112
>> mallinfo_uordblks=1036864
>> mallinfo_fordblks=150976
>> mallinfo_keepcost=97488
>>
>> [global.glusterfs - Memory usage]
>> num_types=122
>>
>> [global.glusterfs - usage-type gf_common_mt_dnscache6 memusage]
>> size=16
>> num_allocs=1
>> max_size=16
>> max_num_allocs=1
>> total_allocs=1
>>
>> [global.glusterfs - usage-type gf_common_mt_event_pool memusage]
>> size=144
>> num_allocs=1
>> max_size=144
>> max_num_allocs=1
>> total_allocs=1
>>
>> [global.glusterfs - usage-type gf_common_mt_reg memusage]
>> size=393216
>> num_allocs=1
>> max_size=393216
>> max_num_allocs=1
>> total_allocs=1
>>
>> [global.glusterfs - usage-type gf_common_mt_epoll_event memusage]
>> size=3096
>> num_allocs=1
>> max_size=3096
>> max_num_allocs=1
>> total_allocs=1
>>
>> [global.glusterfs - usage-type gf_common_mt_fd_ctx memusage]
>> size=0
>> num_allocs=0
>> max_size=192
>> max_num_allocs=1
>> total_allocs=2
>>
>> [global.glusterfs - usage-type gf_common_mt_inode_ctx memusage]
>> size=0
>> num_allocs=0
>> max_size=288
>> max_num_allocs=1
>> total_allocs=32
>>
>> [global.glusterfs - usage-type gf_common_mt_xlator_t memusage]
>> size=28600
>> num_allocs=11
>> max_size=57200
>> max_num_allocs=22
>> total_allocs=55
>>
>> [global.glusterfs - usage-type gf_common_mt_xlator_list_t memusage]
>> size=320
>> num_allocs=20
>> max_size=640
>> max_num_allocs=40
>> total_allocs=100
>>
>> [global.glusterfs - usage-type gf_common_mt_volume_opt_list_t memusage]
>> size=312
>> num_allocs=13
>> max_size=576
>> max_num_allocs=24
>> total_allocs=57
>>
>> [global.glusterfs - usage-type gf_common_mt_gf_timer_t memusage]
>> size=112
>> num_allocs=2
>> max_size=168
>> max_num_allocs=3
>> total_allocs=39761
>>
>> [global.glusterfs - usage-type gf_common_mt_gf_timer_registry_t memusage]
>> size=168
>> num_allocs=1
>> max_size=168
>> max_num_allocs=1
>> total_allocs=1
>>
>> [global.glusterfs - usage-type gf_common_mt_iobuf memusage]
>> size=109536
>> num_allocs=8
>> max_size=109536
>> max_num_allocs=8
>> total_allocs=8
>>
>> [global.glusterfs - usage-type gf_common_mt_iobuf_arena memusage]
>> size=1872
>> num_allocs=9
>> max_size=1872
>> max_num_allocs=9
>> total_allocs=9
>>
>> [global.glusterfs - usage-type gf_common_mt_iobref memusage]
>> size=0
>> num_allocs=0
>> max_size=48
>> max_num_allocs=2
>> total_allocs=39
>>
>> [global.glusterfs - usage-type gf_common_mt_iobuf_pool memusage]
>> size=1776
>> num_allocs=1
>> max_size=1776
>> max_num_allocs=1
>> total_allocs=1
>>
>> [global.glusterfs - usage-type gf_common_mt_memdup memusage]
>> size=0
>> num_allocs=0
>> max_size=118
>> max_num_allocs=7
>> total_allocs=7
>>
>> [global.glusterfs - usage-type gf_common_mt_asprintf memusage]
>> size=265
>> num_allocs=12
>> max_size=680
>> max_num_allocs=90
>> total_allocs=524
>>
>> [global.glusterfs - usage-type gf_common_mt_strdup memusage]
>> size=3781
>> num_allocs=172
>> max_size=4264
>> max_num_allocs=196
>> total_allocs=708
>>
>> [global.glusterfs - usage-type gf_common_mt_socket_private_t memusage]
>> size=1344
>> num_allocs=3
>> max_size=1344
>> max_num_allocs=3
>> total_allocs=3
>>
>> [global.glusterfs - usage-type gf_common_mt_ioq memusage]
>> size=0
>> num_allocs=0
>> max_size=312
>> max_num_allocs=1
>> total_allocs=14
>>
>> [global.glusterfs - usage-type gf_common_mt_char memusage]
>> size=13504
>> num_allocs=151
>> max_size=35426
>> max_num_allocs=217
>> total_allocs=698
>>
>> [global.glusterfs - usage-type gf_common_mt_mem_pool memusage]
>> size=1200
>> num_allocs=10
>> max_size=1200
>> max_num_allocs=10
>> total_allocs=10
>>
>> [global.glusterfs - usage-type gf_common_mt_long memusage]
>> size=9034400
>> num_allocs=10
>> max_size=9034400
>> max_num_allocs=10
>> total_allocs=10
>>
>> [global.glusterfs - usage-type gf_common_mt_rpcsvc_auth_list memusage]
>> size=288
>> num_allocs=4
>> max_size=288
>> max_num_allocs=4
>> total_allocs=4
>>
>> [global.glusterfs - usage-type gf_common_mt_rpcsvc_t memusage]
>> size=200
>> num_allocs=1
>> max_size=200
>> max_num_allocs=1
>> total_allocs=1
>>
>> [global.glusterfs - usage-type gf_common_mt_rpcsvc_program_t memusage]
>> size=240
>> num_allocs=2
>> max_size=240
>> max_num_allocs=2
>> total_allocs=2
>>
>> [global.glusterfs - usage-type gf_common_mt_rpcsvc_listener_t memusage]
>> size=160
>> num_allocs=1
>> max_size=160
>> max_num_allocs=1
>> total_allocs=1
>>
>> [global.glusterfs - usage-type gf_common_mt_rpcsvc_wrapper_t memusage]
>> size=32
>> num_allocs=1
>> max_size=32
>> max_num_allocs=1
>> total_allocs=1
>>
>> [global.glusterfs - usage-type gf_common_mt_rpcclnt_t memusage]
>> size=296
>> num_allocs=1
>> max_size=296
>> max_num_allocs=1
>> total_allocs=1
>>
>> [global.glusterfs - usage-type gf_common_mt_rpcclnt_savedframe_t memusage]
>> size=200
>> num_allocs=1
>> max_size=200
>> max_num_allocs=1
>> total_allocs=1
>>
>> [global.glusterfs - usage-type gf_common_mt_rpc_trans_t memusage]
>> size=8352
>> num_allocs=3
>> max_size=8352
>> max_num_allocs=3
>> total_allocs=3
>>
>> [global.glusterfs - usage-type gf_common_mt_rpc_trans_pollin_t memusage]
>> size=0
>> num_allocs=0
>> max_size=296
>> max_num_allocs=1
>> total_allocs=25
>>
>> [global.glusterfs - usage-type gf_common_mt_rpc_trans_reqinfo_t memusage]
>> size=0
>> num_allocs=0
>> max_size=64
>> max_num_allocs=1
>> total_allocs=13
>>
>> [global.glusterfs - usage-type gf_common_mt_glusterfs_graph_t memusage]
>> size=192
>> num_allocs=1
>> max_size=384
>> max_num_allocs=2
>> total_allocs=5
>>
>> [global.glusterfs - usage-type gf_common_mt_rpcclnt_cb_program_t memusage]
>> size=88
>> num_allocs=1
>> max_size=88
>> max_num_allocs=1
>> total_allocs=1
>>
>> [global.glusterfs - usage-type gf_common_mt_cliententry_t memusage]
>> size=2048
>> num_allocs=1
>> max_size=2048
>> max_num_allocs=1
>> total_allocs=1
>>
>> [global.glusterfs - usage-type gf_common_mt_clienttable_t memusage]
>> size=32
>> num_allocs=1
>> max_size=32
>> max_num_allocs=1
>> total_allocs=1
>>
>> [global.glusterfs - usage-type gf_common_mt_iobrefs memusage]
>> size=0
>> num_allocs=0
>> max_size=256
>> max_num_allocs=2
>> total_allocs=39
>>
>> [global.glusterfs - usage-type gf_sock_mt_lock_array memusage]
>> size=1640
>> num_allocs=1
>> max_size=1640
>> max_num_allocs=1
>> total_allocs=1
>>
>> [global.glusterfs - usage-type gfd_mt_server_cmdline_t memusage]
>> size=40
>> num_allocs=1
>> max_size=40
>> max_num_allocs=1
>> total_allocs=1
>>
>> [global.glusterfs - usage-type gfd_mt_xlator_cmdline_option_t memusage]
>> size=80
>> num_allocs=2
>> max_size=80
>> max_num_allocs=2
>> total_allocs=2
>>
>> [global.glusterfs - usage-type gfd_mt_char memusage]
>> size=53
>> num_allocs=4
>> max_size=53
>> max_num_allocs=4
>> total_allocs=4
>>
>> [global.glusterfs - usage-type gfd_mt_call_pool_t memusage]
>> size=48
>> num_allocs=1
>> max_size=48
>> max_num_allocs=1
>> total_allocs=1
>>
>> [mempool]
>> -----=-----
>> pool-name=gv-cinder-server:fd_t
>> hot-count=2
>> cold-count=1022
>> padded_sizeof=108
>> alloc-count=27465
>> max-alloc=6
>> pool-misses=0
>> cur-stdalloc=0
>> max-stdalloc=0
>> -----=-----
>> pool-name=gv-cinder-server:dentry_t
>> hot-count=30
>> cold-count=16354
>> padded_sizeof=84
>> alloc-count=1689
>> max-alloc=31
>> pool-misses=0
>> cur-stdalloc=0
>> max-stdalloc=0
>> -----=-----
>> pool-name=gv-cinder-server:inode_t
>> hot-count=32
>> cold-count=16352
>> padded_sizeof=156
>> alloc-count=23516
>> max-alloc=35
>> pool-misses=0
>> cur-stdalloc=0
>> max-stdalloc=0
>> -----=-----
>> pool-name=gv-cinder-changelog:changelog_local_t
>> hot-count=0
>> cold-count=64
>> padded_sizeof=116
>> alloc-count=0
>> max-alloc=0
>> pool-misses=0
>> cur-stdalloc=0
>> max-stdalloc=0
>> -----=-----
>> pool-name=gv-cinder-locks:pl_local_t
>> hot-count=0
>> cold-count=32
>> padded_sizeof=148
>> alloc-count=119146
>> max-alloc=5
>> pool-misses=0
>> cur-stdalloc=0
>> max-stdalloc=0
>> -----=-----
>> pool-name=gv-cinder-marker:marker_local_t
>> hot-count=0
>> cold-count=128
>> padded_sizeof=332
>> alloc-count=25101
>> max-alloc=2
>> pool-misses=0
>> cur-stdalloc=0
>> max-stdalloc=0
>> -----=-----
>> pool-name=gv-cinder-quota:quota_local_t
>> hot-count=0
>> cold-count=64
>> padded_sizeof=412
>> alloc-count=0
>> max-alloc=0
>> pool-misses=0
>> cur-stdalloc=0
>> max-stdalloc=0
>> -----=-----
>> pool-name=gv-cinder-server:rpcsvc_request_t
>> hot-count=0
>> cold-count=512
>> padded_sizeof=2828
>> alloc-count=134951621
>> max-alloc=43
>> pool-misses=0
>> cur-stdalloc=0
>> max-stdalloc=0
>> -----=-----
>> pool-name=glusterfs:struct saved_frame
>> hot-count=0
>> cold-count=8
>> padded_sizeof=124
>> alloc-count=13
>> max-alloc=3
>> pool-misses=0
>> cur-stdalloc=0
>> max-stdalloc=0
>> -----=-----
>> pool-name=glusterfs:struct rpc_req
>> hot-count=0
>> cold-count=8
>> padded_sizeof=588
>> alloc-count=13
>> max-alloc=3
>> pool-misses=0
>> cur-stdalloc=0
>> max-stdalloc=0
>> -----=-----
>> pool-name=glusterfs:rpcsvc_request_t
>> hot-count=0
>> cold-count=8
>> padded_sizeof=2828
>> alloc-count=1
>> max-alloc=1
>> pool-misses=0
>> cur-stdalloc=0
>> max-stdalloc=0
>> -----=-----
>> pool-name=glusterfs:log_buf_t
>> hot-count=0
>> cold-count=256
>> padded_sizeof=140
>> alloc-count=1
>> max-alloc=1
>> pool-misses=0
>> cur-stdalloc=0
>> max-stdalloc=0
>> -----=-----
>> pool-name=glusterfs:data_t
>> hot-count=152
>> cold-count=16232
>> padded_sizeof=52
>> alloc-count=88386649
>> max-alloc=219
>> pool-misses=0
>> cur-stdalloc=0
>> max-stdalloc=0
>> -----=-----
>> pool-name=glusterfs:data_pair_t
>> hot-count=110
>> cold-count=16274
>> padded_sizeof=68
>> alloc-count=2103720
>> max-alloc=191
>> pool-misses=0
>> cur-stdalloc=0
>> max-stdalloc=0
>> -----=-----
>> pool-name=glusterfs:dict_t
>> hot-count=58
>> cold-count=4038
>> padded_sizeof=140
>> alloc-count=85559811
>> max-alloc=101
>> pool-misses=0
>> cur-stdalloc=0
>> max-stdalloc=0
>> -----=-----
>> pool-name=glusterfs:call_stub_t
>> hot-count=0
>> cold-count=1024
>> padded_sizeof=3756
>> alloc-count=135291560
>> max-alloc=27
>> pool-misses=0
>> cur-stdalloc=0
>> max-stdalloc=0
>> -----=-----
>> pool-name=glusterfs:call_stack_t
>> hot-count=0
>> cold-count=1024
>> padded_sizeof=1836
>> alloc-count=134886246
>> max-alloc=43
>> pool-misses=0
>> cur-stdalloc=0
>> max-stdalloc=0
>> -----=-----
>> pool-name=glusterfs:call_frame_t
>> hot-count=0
>> cold-count=4096
>> padded_sizeof=172
>> alloc-count=455460023
>> max-alloc=156
>> pool-misses=0
>> cur-stdalloc=0
>> max-stdalloc=0
>>
>> [iobuf.global]
>> iobuf_pool=0x7f81d5ebc980
>> iobuf_pool.default_page_size=131072
>> iobuf_pool.arena_size=12976128
>> iobuf_pool.arena_cnt=8
>> iobuf_pool.request_misses=0
>>
>> [purge.1]
>> purge.1.mem_base=0x7f81d430f000
>> purge.1.active_cnt=0
>> purge.1.passive_cnt=1024
>> purge.1.alloc_cnt=318736820
>> purge.1.max_active=12
>> purge.1.page_size=128
>>
>> [purge.2]
>> purge.2.mem_base=0x7f81d42cf000
>> purge.2.active_cnt=0
>> purge.2.passive_cnt=512
>> purge.2.alloc_cnt=85775611
>> purge.2.max_active=8
>> purge.2.page_size=512
>>
>> [purge.3]
>> purge.3.mem_base=0x7f81d41cf000
>> purge.3.active_cnt=0
>> purge.3.passive_cnt=512
>> purge.3.alloc_cnt=25248
>> purge.3.max_active=1
>> purge.3.page_size=2048
>>
>> [purge.4]
>> purge.4.mem_base=0x7f81d2976000
>> purge.4.active_cnt=0
>> purge.4.passive_cnt=128
>> purge.4.alloc_cnt=1208704
>> purge.4.max_active=11
>> purge.4.page_size=8192
>>
>> [purge.5]
>> purge.5.mem_base=0x7f81d2776000
>> purge.5.active_cnt=0
>> purge.5.passive_cnt=64
>> purge.5.alloc_cnt=330835
>> purge.5.max_active=6
>> purge.5.page_size=32768
>>
>> [purge.6]
>> purge.6.mem_base=0x7f81d2376000
>> purge.6.active_cnt=0
>> purge.6.passive_cnt=32
>> purge.6.alloc_cnt=47899288
>> purge.6.max_active=4
>> purge.6.page_size=131072
>>
>> [purge.7]
>> purge.7.mem_base=0x7f81d2176000
>> purge.7.active_cnt=0
>> purge.7.passive_cnt=8
>> purge.7.alloc_cnt=212
>> purge.7.max_active=1
>> purge.7.page_size=262144
>>
>> [arena.8]
>> arena.8.mem_base=0x7f81d1f76000
>> arena.8.active_cnt=0
>> arena.8.passive_cnt=2
>> arena.8.alloc_cnt=0
>> arena.8.max_active=0
>> arena.8.page_size=1048576
>>
>> [global.callpool]
>> callpool_address=0x7f81d5ed8630
>> callpool.cnt=0
>>
>> [active graph - 5]
>>
>> [protocol/server.gv-cinder-server - Memory usage]
>> num_types=127
>>
>> [protocol/server.gv-cinder-server - usage-type 0 memusage]
>> size=0
>> num_allocs=0
>> max_size=96
>> max_num_allocs=3
>> total_allocs=450
>>
>> [protocol/server.gv-cinder-server - usage-type gf_common_mt_fdentry_t
>> memusage]
>> size=51200
>> num_allocs=25
>> max_size=81920
>> max_num_allocs=40
>> total_allocs=403
>>
>> [protocol/server.gv-cinder-server - usage-type gf_common_mt_fdtable_t
>> memusage]
>> size=1536
>> num_allocs=24
>> max_size=2496
>> max_num_allocs=39
>> total_allocs=150
>>
>> [protocol/server.gv-cinder-server - usage-type gf_common_mt_fd_ctx
>> memusage]
>> size=384
>> num_allocs=2
>> max_size=1152
>> max_num_allocs=6
>> total_allocs=27465
>>
>> [protocol/server.gv-cinder-server - usage-type gf_common_mt_inode_ctx
>> memusage]
>> size=1728
>> num_allocs=6
>> max_size=2592
>> max_num_allocs=9
>> total_allocs=23477
>>
>> [protocol/server.gv-cinder-server - usage-type gf_common_mt_list_head
>> memusage]
>> size=1273488
>> num_allocs=2
>> max_size=1273488
>> max_num_allocs=2
>> total_allocs=2
>>
>> [protocol/server.gv-cinder-server - usage-type gf_common_mt_inode_table_t
>> memusage]
>> size=200
>> num_allocs=1
>> max_size=200
>> max_num_allocs=1
>> total_allocs=1
>>
>> [protocol/server.gv-cinder-server - usage-type
>> gf_common_mt_volume_opt_list_t memusage]
>> size=264
>> num_allocs=11
>> max_size=264
>> max_num_allocs=11
>> total_allocs=11
>>
>> [protocol/server.gv-cinder-server - usage-type gf_common_mt_auth_handle_t
>> memusage]
>> size=48
>> num_allocs=2
>> max_size=48
>> max_num_allocs=2
>> total_allocs=10
>>
>> [protocol/server.gv-cinder-server - usage-type gf_common_mt_iobref
>> memusage]
>> size=0
>> num_allocs=0
>> max_size=1056
>> max_num_allocs=44
>> total_allocs=221069677
>>
>> [protocol/server.gv-cinder-server - usage-type gf_common_mt_memdup
>> memusage]
>> size=0
>> num_allocs=0
>> max_size=779
>> max_num_allocs=35
>> total_allocs=1937699
>>
>> [protocol/server.gv-cinder-server - usage-type gf_common_mt_asprintf
>> memusage]
>> size=144
>> num_allocs=5
>> max_size=3125
>> max_num_allocs=10
>> total_allocs=834957
>>
>> [protocol/server.gv-cinder-server - usage-type gf_common_mt_strdup
>> memusage]
>> size=10972
>> num_allocs=284
>> max_size=11475
>> max_num_allocs=303
>> total_allocs=84802827
>>
>> [protocol/server.gv-cinder-server - usage-type
>> gf_common_mt_socket_private_t memusage]
>> size=11200
>> num_allocs=25
>> max_size=17920
>> max_num_allocs=40
>> total_allocs=151
>>
>> [protocol/server.gv-cinder-server - usage-type gf_common_mt_ioq memusage]
>> size=0
>> num_allocs=0
>> max_size=936
>> max_num_allocs=3
>> total_allocs=134951621
>>
>> [protocol/server.gv-cinder-server - usage-type gf_common_mt_char memusage]
>> size=149
>> num_allocs=7
>> max_size=3525
>> max_num_allocs=61
>> total_allocs=173345106
>>
>> [protocol/server.gv-cinder-server - usage-type gf_common_mt_mem_pool
>> memusage]
>> size=480
>> num_allocs=4
>> max_size=480
>> max_num_allocs=4
>> total_allocs=4
>>
>> [protocol/server.gv-cinder-server - usage-type gf_common_mt_long memusage]
>> size=5490688
>> num_allocs=4
>> max_size=5490688
>> max_num_allocs=4
>> total_allocs=4
>>
>> [protocol/server.gv-cinder-server - usage-type
>> gf_common_mt_rpcsvc_auth_list memusage]
>> size=288
>> num_allocs=4
>> max_size=288
>> max_num_allocs=4
>> total_allocs=4
>>
>> [protocol/server.gv-cinder-server - usage-type gf_common_mt_rpcsvc_t
>> memusage]
>> size=200
>> num_allocs=1
>> max_size=200
>> max_num_allocs=1
>> total_allocs=1
>>
>> [protocol/server.gv-cinder-server - usage-type
>> gf_common_mt_rpcsvc_program_t memusage]
>> size=360
>> num_allocs=3
>> max_size=360
>> max_num_allocs=3
>> total_allocs=3
>>
>> [protocol/server.gv-cinder-server - usage-type
>> gf_common_mt_rpcsvc_listener_t memusage]
>> size=160
>> num_allocs=1
>> max_size=160
>> max_num_allocs=1
>> total_allocs=1
>>
>> [protocol/server.gv-cinder-server - usage-type
>> gf_common_mt_rpcsvc_wrapper_t memusage]
>> size=64
>> num_allocs=2
>> max_size=64
>> max_num_allocs=2
>> total_allocs=128
>>
>> [protocol/server.gv-cinder-server - usage-type gf_common_mt_rpc_trans_t
>> memusage]
>> size=69600
>> num_allocs=25
>> max_size=111360
>> max_num_allocs=40
>> total_allocs=151
>>
>> [protocol/server.gv-cinder-server - usage-type
>> gf_common_mt_rpc_trans_pollin_t memusage]
>> size=0
>> num_allocs=0
>> max_size=296
>> max_num_allocs=1
>> total_allocs=134951621
>>
>> [protocol/server.gv-cinder-server - usage-type gf_common_mt_fd_lk_ctx_t
>> memusage]
>> size=48
>> num_allocs=2
>> max_size=144
>> max_num_allocs=6
>> total_allocs=27465
>>
>> [protocol/server.gv-cinder-server - usage-type gf_common_mt_client_t
>> memusage]
>> size=3072
>> num_allocs=48
>> max_size=4992
>> max_num_allocs=78
>> total_allocs=300
>>
>> [protocol/server.gv-cinder-server - usage-type gf_common_mt_client_ctx
>> memusage]
>> size=3072
>> num_allocs=24
>> max_size=4992
>> max_num_allocs=39
>> total_allocs=150
>>
>> [protocol/server.gv-cinder-server - usage-type gf_common_mt_iobrefs
>> memusage]
>> size=0
>> num_allocs=0
>> max_size=5632
>> max_num_allocs=44
>> total_allocs=221069677
>>
>> [protocol/server.gv-cinder-server - usage-type gf_server_mt_server_conf_t
>> memusage]
>> size=50088
>> num_allocs=25
>> max_size=50568
>> max_num_allocs=40
>> total_allocs=151
>>
>> [protocol/server.gv-cinder-server - usage-type gf_server_mt_state_t
>> memusage]
>> size=0
>> num_allocs=0
>> max_size=87376
>> max_num_allocs=43
>> total_allocs=134886335
>>
>> [protocol/server.gv-cinder-server - usage-type gf_server_mt_dirent_rsp_t
>> memusage]
>> size=0
>> num_allocs=0
>> max_size=3040
>> max_num_allocs=19
>> total_allocs=26251
>>
>> [protocol/server.gv-cinder-server - usage-type gf_server_mt_rsp_buf_t
>> memusage]
>> size=0
>> num_allocs=0
>> max_size=961
>> max_num_allocs=19
>> total_allocs=644
>>
>> [xlator.protocol.server.priv]
>> server.total-bytes-read=11424142224
>> server.total-bytes-write=3014661265649
>> conn.2.bound_xl./gluster/brick03/cinder-std-01.hashsize=14057
>> conn.2.bound_xl./gluster/brick03/cinder-std-01.name
>> =/gluster/brick03/cinder-std-01/inode
>> conn.2.bound_xl./gluster/brick03/cinder-std-01.lru_limit=16384
>> conn.2.bound_xl./gluster/brick03/cinder-std-01.active_size=3
>> conn.2.bound_xl./gluster/brick03/cinder-std-01.lru_size=29
>> conn.2.bound_xl./gluster/brick03/cinder-std-01.purge_size=0
>>
>> [conn.2.bound_xl./gluster/brick03/cinder-std-01.active.1]
>> gfid=8f787374-3b1a-4a0e-8ec5-41f41724c22d
>> nlookup=10372
>> fd-count=1
>> ref=4
>> ia_type=1
>>
>> [xlator.features.locks.gv-cinder-locks.inode]
>> path=/vol-820679f7-4d6a-4397-a522-6fff941ab862
>> mandatory=0
>>
>> [conn.2.bound_xl./gluster/brick03/cinder-std-01.active.2]
>> gfid=b8b58762-fd76-4d23-9683-50abf6447d2e
>> nlookup=11355
>> fd-count=1
>> ref=1
>> ia_type=1
>>
>> [xlator.features.locks.gv-cinder-locks.inode]
>> path=/vol-ad3357ba-c264-4e58-a1ac-1d90cb0f960d
>> mandatory=0
>>
>> [conn.2.bound_xl./gluster/brick03/cinder-std-01.active.3]
>> gfid=00000000-0000-0000-0000-000000000001
>> nlookup=13
>> fd-count=0
>> ref=1
>> ia_type=2
>>
>> [xlator.features.locks.gv-cinder-locks.inode]
>> path=/
>> mandatory=0
>>
>> [conn.2.bound_xl./gluster/brick03/cinder-std-01.lru.1]
>> gfid=8bbcfe88-b91e-4643-bb7a-49d2e7a15bcf
>> nlookup=17
>> fd-count=0
>> ref=0
>> ia_type=1
>>
>> [conn.2.bound_xl./gluster/brick03/cinder-std-01.lru.2]
>> gfid=97a98f35-d5c2-4a2c-b900-cacb0b74720e
>> nlookup=13
>> fd-count=0
>> ref=0
>> ia_type=1
>>
>> [conn.2.bound_xl./gluster/brick03/cinder-std-01.lru.3]
>> gfid=4ffec429-231f-4e27-a227-72388d607eed
>> nlookup=26
>> fd-count=0
>> ref=0
>> ia_type=1
>>
>> [xlator.features.locks.gv-cinder-locks.inode]
>> path=/vol-afa9bc2c-5666-422b-88c4-2cb6b1f1fcaa
>> mandatory=0
>>
>> [conn.2.bound_xl./gluster/brick03/cinder-std-01.lru.4]
>> gfid=1e2f6504-5c49-4dda-9a20-5f47f11b7f78
>> nlookup=13
>> fd-count=0
>> ref=0
>> ia_type=1
>>
>> [conn.2.bound_xl./gluster/brick03/cinder-std-01.lru.5]
>> gfid=ddd4bec4-d73b-4649-ab05-63c7219a9c2c
>> nlookup=13
>> fd-count=0
>> ref=0
>> ia_type=1
>>
>> [conn.2.bound_xl./gluster/brick03/cinder-std-01.lru.6]
>> gfid=02b80a30-76d8-4c30-9575-6fbec905d8d3
>> nlookup=17
>> fd-count=0
>> ref=0
>> ia_type=1
>>
>> [conn.2.bound_xl./gluster/brick03/cinder-std-01.lru.7]
>> gfid=43bc1a78-c9b7-471b-9721-4da560745a6e
>> nlookup=17
>> fd-count=0
>> ref=0
>> ia_type=1
>>
>> [conn.2.bound_xl./gluster/brick03/cinder-std-01.lru.8]
>> gfid=396b00de-be60-4700-b617-3d3edfc5aeed
>> nlookup=17
>> fd-count=0
>> ref=0
>> ia_type=1
>>
>> [conn.2.bound_xl./gluster/brick03/cinder-std-01.lru.9]
>> gfid=9f654186-ca48-4e08-8fdd-dd86c9e5dbce
>> nlookup=17
>> fd-count=0
>> ref=0
>> ia_type=1
>>
>> [conn.2.bound_xl./gluster/brick03/cinder-std-01.lru.10]
>> gfid=09d93e82-012b-4e06-abee-34c0d05ca6e8
>> nlookup=17
>> fd-count=0
>> ref=0
>> ia_type=1
>>
>> [conn.2.bound_xl./gluster/brick03/cinder-std-01.lru.11]
>> gfid=ba5986bb-7e18-4d5f-a9e7-04aa08e732c1
>> nlookup=26
>> fd-count=0
>> ref=0
>> ia_type=1
>>
>> [xlator.features.locks.gv-cinder-locks.inode]
>> path=/vol-8a05b90e-820a-4b97-817c-35d9e3a32c20
>> mandatory=0
>>
>> [conn.2.bound_xl./gluster/brick03/cinder-std-01.lru.12]
>> gfid=e298ffe1-49bd-473e-8707-56031118b196
>> nlookup=30
>> fd-count=0
>> ref=0
>> ia_type=1
>>
>> [conn.2.bound_xl./gluster/brick03/cinder-std-01.lru.13]
>> gfid=f83ac264-2cca-4ef6-bbf9-25e0c1c2e307
>> nlookup=30
>> fd-count=0
>> ref=0
>> ia_type=1
>>
>> [conn.2.bound_xl./gluster/brick03/cinder-std-01.lru.14]
>> gfid=22f1caad-537b-4309-b9ca-b23068bc5ca6
>> nlookup=26
>> fd-count=0
>> ref=0
>> ia_type=1
>>
>> [conn.2.bound_xl./gluster/brick03/cinder-std-01.lru.15]
>> gfid=6ae80d7a-76b2-4650-be55-7bee4fb5fd49
>> nlookup=26
>> fd-count=0
>> ref=0
>> ia_type=1
>>
>> [conn.2.bound_xl./gluster/brick03/cinder-std-01.lru.16]
>> gfid=d0e7b11a-ca37-4905-ba87-2b7e5b2e7198
>> nlookup=30
>> fd-count=0
>> ref=0
>> ia_type=1
>>
>> [conn.2.bound_xl./gluster/brick03/cinder-std-01.lru.17]
>> gfid=a8bdac1f-39e3-4d5a-8b96-0da4900dfa91
>> nlookup=30
>> fd-count=0
>> ref=0
>> ia_type=1
>>
>> [conn.2.bound_xl./gluster/brick03/cinder-std-01.lru.18]
>> gfid=edb13fa6-59a3-4374-8645-a17e319559d1
>> nlookup=17
>> fd-count=0
>> ref=0
>> ia_type=1
>>
>> [conn.2.bound_xl./gluster/brick03/cinder-std-01.lru.19]
>> gfid=e13a41d1-2945-486f-9016-8e8ec9e5889d
>> nlookup=26
>> fd-count=0
>> ref=0
>> ia_type=1
>>
>> [xlator.features.locks.gv-cinder-locks.inode]
>> path=/vol-3aee038f-2ba9-4195-883c-62978b0d394d
>> mandatory=0
>>
>> [conn.2.bound_xl./gluster/brick03/cinder-std-01.lru.20]
>> gfid=fc021567-c228-4a7b-ad47-b4b4eaca7fd7
>> nlookup=26
>> fd-count=0
>> ref=0
>> ia_type=1
>>
>> [xlator.features.locks.gv-cinder-locks.inode]
>> path=/vol-3aee038f-2ba9-4195-883c-62978b0d394d.info
>> mandatory=0
>>
>> [conn.2.bound_xl./gluster/brick03/cinder-std-01.lru.21]
>> gfid=337d82da-1104-4f2c-9ad1-900265386532
>> nlookup=26
>> fd-count=0
>> ref=0
>> ia_type=1
>>
>> [xlator.features.locks.gv-cinder-locks.inode]
>> path=/vol-92d4c940-409a-4bfa-ad1f-9c87690dc0f0
>> mandatory=0
>>
>> [conn.2.bound_xl./gluster/brick03/cinder-std-01.lru.22]
>> gfid=989060f7-6bb1-4052-a811-c3136eae0395
>> nlookup=26
>> fd-count=0
>> ref=0
>> ia_type=1
>>
>> [xlator.features.locks.gv-cinder-locks.inode]
>> path=/vol-4d03e51a-da7f-4a53-ae85-6f05e36d4d07
>> mandatory=0
>>
>> [conn.2.bound_xl./gluster/brick03/cinder-std-01.lru.23]
>> gfid=07398a28-6a92-4243-a0e5-d3e03eba4fa5
>> nlookup=17
>> fd-count=0
>> ref=0
>> ia_type=1
>>
>> [conn.2.bound_xl./gluster/brick03/cinder-std-01.lru.24]
>> gfid=50d3c83a-d670-4359-939c-3601abc6ef07
>> nlookup=17
>> fd-count=0
>> ref=0
>> ia_type=1
>>
>> [conn.2.bound_xl./gluster/brick03/cinder-std-01.lru.25]
>> gfid=5b810982-131a-4ab7-8986-034e1f5fec3e
>> nlookup=17
>> fd-count=0
>> ref=0
>> ia_type=1
>>
>> [conn.2.bound_xl./gluster/brick03/cinder-std-01.lru.26]
>> gfid=cfc8019d-69c7-45fb-b95c-1dddbe4e24bb
>> nlookup=26
>> fd-count=0
>> ref=0
>> ia_type=1
>>
>> [xlator.features.locks.gv-cinder-locks.inode]
>> path=/vol-4d46d6d6-3d34-4243-bb86-e4faeae5ba75
>> mandatory=0
>>
>> [conn.2.bound_xl./gluster/brick03/cinder-std-01.lru.27]
>> gfid=99c346af-7edf-4978-b020-af70eba07607
>> nlookup=7013
>> fd-count=0
>> ref=0
>> ia_type=1
>>
>> [xlator.features.locks.gv-cinder-locks.inode]
>> path=/vol-cdf2ad03-e114-4f73-b38d-117c08f2545c
>> mandatory=0
>>
>> [conn.2.bound_xl./gluster/brick03/cinder-std-01.lru.28]
>> gfid=c5915f07-8d1f-45df-8409-f3ea0f7b3da9
>> nlookup=5768
>> fd-count=0
>> ref=0
>> ia_type=2
>>
>> [conn.2.bound_xl./gluster/brick03/cinder-std-01.lru.29]
>> gfid=4a7eb4bb-10a9-43b9-a4dd-d8256697b044
>> nlookup=6981
>> fd-count=0
>> ref=0
>> ia_type=1
>>
>> [xlator.features.locks.gv-cinder-locks.inode]
>> path=/vol-f0e0740f-4e07-4a62-a2b4-9e6ce18496a5
>> mandatory=0
>> conn.2.id=controller01-8035-2016/02/11-21:48:33:714247-gv-cinder-client-2-0-0
>> conn.2.ref=1
>> conn.2.bound_xl=/gluster/brick03/cinder-std-01
>> conn.4.id=compute015-24816-2016/02/11-21:56:23:422938-gv-cinder-client-2-0-0
>> conn.4.ref=1
>> conn.4.bound_xl=/gluster/brick03/cinder-std-01
>> conn.6.id=compute013-12886-2016/02/11-22:41:00:857700-gv-cinder-client-2-0-0
>> conn.6.ref=1
>> conn.6.bound_xl=/gluster/brick03/cinder-std-01
>> conn.7.id=compute015-20555-2016/02/11-21:54:15:924117-gv-cinder-client-2-0-0
>> conn.7.ref=1
>> conn.7.bound_xl=/gluster/brick03/cinder-std-01
>> conn.8.id=storage07-29222-2016/02/12-08:16:20:52268-gv-cinder-client-2-0-0
>> conn.8.ref=1
>> conn.8.bound_xl=/gluster/brick03/cinder-std-01
>> conn.9.id=storage07-31653-2016/01/29-06:43:27:2595-gv-cinder-client-2-0-2
>> conn.9.ref=1
>> conn.9.bound_xl=/gluster/brick03/cinder-std-01
>> conn.11.id=compute015-7236-2016/02/11-22:03:52:865440-gv-cinder-client-2-0-0
>> conn.11.ref=1
>> conn.11.bound_xl=/gluster/brick03/cinder-std-01
>> conn.13.id=storage03-7760-2016/02/11-20:45:51:151440-gv-cinder-client-2-0-0
>> conn.13.ref=1
>> conn.13.bound_xl=/gluster/brick03/cinder-std-01
>> conn.14.id=compute021-12040-2016/02/11-22:49:59:497964-gv-cinder-client-2-0-0
>> conn.14.ref=1
>> conn.14.bound_xl=/gluster/brick03/cinder-std-01
>> conn.19.id=compute016-3457-2016/02/11-23:05:18:739493-gv-cinder-client-2-0-0
>> conn.19.ref=1
>> conn.19.bound_xl=/gluster/brick03/cinder-std-01
>> conn.21.id=compute011-17350-2016/02/12-00:04:46:704199-gv-cinder-client-2-0-0
>> conn.21.ref=1
>> conn.21.bound_xl=/gluster/brick03/cinder-std-01
>> conn.22.id=compute014-2359-2016/02/11-22:37:08:48705-gv-cinder-client-2-0-0
>> conn.22.ref=1
>> conn.22.bound_xl=/gluster/brick03/cinder-std-01
>> conn.23.id=storage04-23521-2016/02/11-20:45:50:904106-gv-cinder-client-2-0-0
>> conn.23.ref=1
>> conn.23.bound_xl=/gluster/brick03/cinder-std-01
>> conn.25.id=storage08-20934-2016/02/11-20:45:47:836024-gv-cinder-client-2-0-0
>> conn.25.ref=1
>> conn.25.bound_xl=/gluster/brick03/cinder-std-01
>> conn.26.id=storage04-23562-2016/02/11-20:45:51:830750-gv-cinder-client-2-0-0
>> conn.26.ref=1
>> conn.26.bound_xl=/gluster/brick03/cinder-std-01
>> conn.27.id=compute018-31965-2016/02/11-21:35:27:794540-gv-cinder-client-2-0-0
>> conn.27.ref=1
>> conn.27.bound_xl=/gluster/brick03/cinder-std-01
>> conn.28.id=compute018-16969-2016/02/11-21:43:33:877210-gv-cinder-client-2-0-0
>> conn.28.ref=1
>> conn.28.bound_xl=/gluster/brick03/cinder-std-01
>> conn.29.id=compute017-21189-2016/02/11-22:53:39:559720-gv-cinder-client-2-0-0
>> conn.29.ref=1
>> conn.29.bound_xl=/gluster/brick03/cinder-std-01
>> conn.31.id=compute018-4161-2016/02/11-21:37:35:723582-gv-cinder-client-2-0-0
>> conn.31.ref=1
>> conn.31.bound_xl=/gluster/brick03/cinder-std-01
>> conn.34.id=storage03-1915-2016/02/02-16:51:43:144253-gv-cinder-client-2-0-1
>> conn.34.ref=1
>> conn.34.bound_xl=/gluster/brick03/cinder-std-01
>> conn.35.id=storage08-20965-2016/02/11-20:45:48:848917-gv-cinder-client-2-0-0
>> conn.35.ref=1
>> conn.35.bound_xl=/gluster/brick03/cinder-std-01
>> conn.36.id=compute019-15495-2016/02/11-22:58:56:961354-gv-cinder-client-2-0-0
>> conn.36.ref=1
>> conn.36.bound_xl=/gluster/brick03/cinder-std-01
>> conn.37.id=storage04-29111-2016/02/11-18:26:13:30717-gv-cinder-client-2-0-0
>> conn.37.ref=1
>> conn.37.bound_xl=/gluster/brick03/cinder-std-01
>> conn.38.id=storage03-7806-2016/02/11-20:45:52:54804-gv-cinder-client-2-0-0
>> conn.38.ref=1
>> conn.38.bound_xl=/gluster/brick03/cinder-std-01
>>
>> [debug/io-stats./gluster/brick03/cinder-std-01 - Memory usage]
>> num_types=121
>>
>> [debug/io-stats./gluster/brick03/cinder-std-01 - usage-type
>> gf_common_mt_asprintf memusage]
>> size=0
>> num_allocs=0
>> max_size=109
>> max_num_allocs=1
>> total_allocs=7
>>
>> [debug/io-stats./gluster/brick03/cinder-std-01 - usage-type
>> gf_common_mt_strdup memusage]
>> size=258
>> num_allocs=6
>> max_size=406
>> max_num_allocs=10
>> total_allocs=10239
>>
>> [debug/io-stats./gluster/brick03/cinder-std-01 - usage-type
>> gf_common_mt_char memusage]
>> size=0
>> num_allocs=0
>> max_size=154
>> max_num_allocs=1
>> total_allocs=7
>>
>> [debug/io-stats./gluster/brick03/cinder-std-01 - usage-type
>> gf_io_stats_mt_ios_conf memusage]
>> size=5424
>> num_allocs=1
>> max_size=5424
>> max_num_allocs=1
>> total_allocs=1
>>
>> [debug/io-stats./gluster/brick03/cinder-std-01 - usage-type
>> gf_io_stats_mt_ios_fd memusage]
>> size=1104
>> num_allocs=2
>> max_size=2760
>> max_num_allocs=5
>> total_allocs=9361
>>
>> [debug/io-stats./gluster/brick03/cinder-std-01 - usage-type
>> gf_io_stats_mt_ios_stat memusage]
>> size=928
>> num_allocs=14
>> max_size=1080
>> max_num_allocs=15
>> total_allocs=888
>>
>> [debug/io-stats./gluster/brick03/cinder-std-01 - usage-type
>> gf_io_stats_mt_ios_stat_list memusage]
>> size=512
>> num_allocs=16
>> max_size=512
>> max_num_allocs=16
>> total_allocs=16
>> cumulative.data_read=6270431573049
>> cumulative.data_written=4366462976
>> incremental.data_read=6270431573049
>> incremental.data_written=4366462976
>> /gluster/brick03/cinder-std-01.cumulative.NULL=0,0,0.000,0.000,0.000
>> /gluster/brick03/cinder-std-01.incremental.NULL=0,0,0.000,0.000,0.000
>>
>> /gluster/brick03/cinder-std-01.cumulative.STAT=399758,32878860,21.000,9986.000,82.247
>>
>> /gluster/brick03/cinder-std-01.incremental.STAT=399758,32878860,21.000,9986.000,82.247
>> /gluster/brick03/cinder-std-01.cumulative.READLINK=0,0,0.000,0.000,0.000
>> /gluster/brick03/cinder-std-01.incremental.READLINK=0,0,0.000,0.000,0.000
>> /gluster/brick03/cinder-std-01.cumulative.MKNOD=0,0,0.000,0.000,0.000
>> /gluster/brick03/cinder-std-01.incremental.MKNOD=0,0,0.000,0.000,0.000
>> /gluster/brick03/cinder-std-01.cumulative.MKDIR=0,0,0.000,0.000,0.000
>> /gluster/brick03/cinder-std-01.incremental.MKDIR=0,0,0.000,0.000,0.000
>>
>> /gluster/brick03/cinder-std-01.cumulative.UNLINK=1659,210449,32.000,375.000,126.853
>>
>> /gluster/brick03/cinder-std-01.incremental.UNLINK=1659,210449,32.000,375.000,126.853
>> /gluster/brick03/cinder-std-01.cumulative.RMDIR=0,0,0.000,0.000,0.000
>> /gluster/brick03/cinder-std-01.incremental.RMDIR=0,0,0.000,0.000,0.000
>> /gluster/brick03/cinder-std-01.cumulative.SYMLINK=0,0,0.000,0.000,0.000
>> /gluster/brick03/cinder-std-01.incremental.SYMLINK=0,0,0.000,0.000,0.000
>> /gluster/brick03/cinder-std-01.cumulative.RENAME=0,0,0.000,0.000,0.000
>> /gluster/brick03/cinder-std-01.incremental.RENAME=0,0,0.000,0.000,0.000
>> /gluster/brick03/cinder-std-01.cumulative.LINK=0,0,0.000,0.000,0.000
>> /gluster/brick03/cinder-std-01.incremental.LINK=0,0,0.000,0.000,0.000
>> /gluster/brick03/cinder-std-01.cumulative.TRUNCATE=0,0,0.000,0.000,0.000
>> /gluster/brick03/cinder-std-01.incremental.TRUNCATE=0,0,0.000,0.000,0.000
>>
>> /gluster/brick03/cinder-std-01.cumulative.OPEN=8487,926077,36.000,260.000,109.117
>>
>> /gluster/brick03/cinder-std-01.incremental.OPEN=8487,926077,36.000,260.000,109.117
>> /gluster/brick03/cinder-std-01.cumulative.READ=48833565,12708669647,14.000,1602328.000,260.245
>> /gluster/brick03/cinder-std-01.incremental.READ=48833565,12708669647,14.000,1602328.000,260.245
>>
>> /gluster/brick03/cinder-std-01.cumulative.WRITE=303078,62499624,17.000,172831.000,206.216
>>
>> /gluster/brick03/cinder-std-01.incremental.WRITE=303078,62499624,17.000,172831.000,206.216
>>
>> /gluster/brick03/cinder-std-01.cumulative.STATFS=11584,1266007,21.000,55380.000,109.289
>>
>> /gluster/brick03/cinder-std-01.incremental.STATFS=11584,1266007,21.000,55380.000,109.289
>>
>> /gluster/brick03/cinder-std-01.cumulative.FLUSH=882,54202,16.000,221.000,61.454
>>
>> /gluster/brick03/cinder-std-01.incremental.FLUSH=882,54202,16.000,221.000,61.454
>>
>> /gluster/brick03/cinder-std-01.cumulative.FSYNC=63714,6442491979,50.000,906161.000,101115.798
>>
>> /gluster/brick03/cinder-std-01.incremental.FSYNC=63714,6442491979,50.000,906161.000,101115.798
>> /gluster/brick03/cinder-std-01.cumulative.SETXATTR=0,0,0.000,0.000,0.000
>> /gluster/brick03/cinder-std-01.incremental.SETXATTR=0,0,0.000,0.000,0.000
>>
>> /gluster/brick03/cinder-std-01.cumulative.GETXATTR=25101,2796339,19.000,59253.000,111.403
>>
>> /gluster/brick03/cinder-std-01.incremental.GETXATTR=25101,2796339,19.000,59253.000,111.403
>>
>> /gluster/brick03/cinder-std-01.cumulative.REMOVEXATTR=0,0,0.000,0.000,0.000
>>
>> /gluster/brick03/cinder-std-01.incremental.REMOVEXATTR=0,0,0.000,0.000,0.000
>>
>> /gluster/brick03/cinder-std-01.cumulative.OPENDIR=15586,1103872,1.000,275.000,70.825
>>
>> /gluster/brick03/cinder-std-01.incremental.OPENDIR=15586,1103872,1.000,275.000,70.825
>> /gluster/brick03/cinder-std-01.cumulative.FSYNCDIR=0,0,0.000,0.000,0.000
>> /gluster/brick03/cinder-std-01.incremental.FSYNCDIR=0,0,0.000,0.000,0.000
>> /gluster/brick03/cinder-std-01.cumulative.ACCESS=0,0,0.000,0.000,0.000
>> /gluster/brick03/cinder-std-01.incremental.ACCESS=0,0,0.000,0.000,0.000
>>
>> /gluster/brick03/cinder-std-01.cumulative.CREATE=874,50472530,232.000,207796.000,57748.890
>>
>> /gluster/brick03/cinder-std-01.incremental.CREATE=874,50472530,232.000,207796.000,57748.890
>> /gluster/brick03/cinder-std-01.cumulative.FTRUNCATE=0,0,0.000,0.000,0.000
>> /gluster/brick03/cinder-std-01.incremental.FTRUNCATE=0,0,0.000,0.000,0.000
>>
>> /gluster/brick03/cinder-std-01.cumulative.FSTAT=17,2177,79.000,171.000,128.059
>>
>> /gluster/brick03/cinder-std-01.incremental.FSTAT=17,2177,79.000,171.000,128.059
>> /gluster/brick03/cinder-std-01.cumulative.LK=0,0,0.000,0.000,0.000
>> /gluster/brick03/cinder-std-01.incremental.LK=0,0,0.000,0.000,0.000
>>
>> /gluster/brick03/cinder-std-01.cumulative.LOOKUP=125661,33305278,11.000,85401.000,265.041
>>
>> /gluster/brick03/cinder-std-01.incremental.LOOKUP=125661,33305278,11.000,85401.000,265.041
>>
>> /gluster/brick03/cinder-std-01.cumulative.READDIR=9962,733765,11.000,22711.000,73.656
>>
>> /gluster/brick03/cinder-std-01.incremental.READDIR=9962,733765,11.000,22711.000,73.656
>>
>> /gluster/brick03/cinder-std-01.cumulative.INODELK=84548282,8422980210,10.000,3541049.000,99.623
>>
>> /gluster/brick03/cinder-std-01.incremental.INODELK=84548282,8422980210,10.000,3541049.000,99.623
>>
>> /gluster/brick03/cinder-std-01.cumulative.FINODELK=160805,38053301587,10.000,160453695.000,236642.527
>>
>> /gluster/brick03/cinder-std-01.incremental.FINODELK=160805,38053301587,10.000,160453695.000,236642.527
>>
>> /gluster/brick03/cinder-std-01.cumulative.ENTRYLK=13410,926496,12.000,202.000,69.090
>>
>> /gluster/brick03/cinder-std-01.incremental.ENTRYLK=13410,926496,12.000,202.000,69.090
>> /gluster/brick03/cinder-std-01.cumulative.FENTRYLK=0,0,0.000,0.000,0.000
>> /gluster/brick03/cinder-std-01.incremental.FENTRYLK=0,0,0.000,0.000,0.000
>>
>> /gluster/brick03/cinder-std-01.cumulative.XATTROP=3719,655734,42.000,17867.000,176.320
>>
>> /gluster/brick03/cinder-std-01.incremental.XATTROP=3719,655734,42.000,17867.000,176.320
>>
>> /gluster/brick03/cinder-std-01.cumulative.FXATTROP=361596,56177842,27.000,142536.000,155.361
>>
>> /gluster/brick03/cinder-std-01.incremental.FXATTROP=361596,56177842,27.000,142536.000,155.361
>> /gluster/brick03/cinder-std-01.cumulative.FGETXATTR=0,0,0.000,0.000,0.000
>> /gluster/brick03/cinder-std-01.incremental.FGETXATTR=0,0,0.000,0.000,0.000
>> /gluster/brick03/cinder-std-01.cumulative.FSETXATTR=0,0,0.000,0.000,0.000
>> /gluster/brick03/cinder-std-01.incremental.FSETXATTR=0,0,0.000,0.000,0.000
>> /gluster/brick03/cinder-std-01.cumulative.RCHECKSUM=0,0,0.000,0.000,0.000
>> /gluster/brick03/cinder-std-01.incremental.RCHECKSUM=0,0,0.000,0.000,0.000
>>
>> /gluster/brick03/cinder-std-01.cumulative.SETATTR=23,3450,93.000,178.000,150.000
>>
>> /gluster/brick03/cinder-std-01.incremental.SETATTR=23,3450,93.000,178.000,150.000
>> /gluster/brick03/cinder-std-01.cumulative.FSETATTR=0,0,0.000,0.000,0.000
>> /gluster/brick03/cinder-std-01.incremental.FSETATTR=0,0,0.000,0.000,0.000
>>
>> /gluster/brick03/cinder-std-01.cumulative.READDIRP=39,421236,40.000,97679.000,10800.923
>>
>> /gluster/brick03/cinder-std-01.incremental.READDIRP=39,421236,40.000,97679.000,10800.923
>> /gluster/brick03/cinder-std-01.cumulative.FORGET=874,0,0.000,0.000,0.000
>> /gluster/brick03/cinder-std-01.incremental.FORGET=874,0,0.000,0.000,0.000
>> /gluster/brick03/cinder-std-01.cumulative.RELEASE=9359,0,0.000,0.000,0.000
>>
>> /gluster/brick03/cinder-std-01.incremental.RELEASE=9359,0,0.000,0.000,0.000
>>
>> /gluster/brick03/cinder-std-01.cumulative.RELEASEDIR=15586,0,0.000,0.000,0.000
>>
>> /gluster/brick03/cinder-std-01.incremental.RELEASEDIR=15586,0,0.000,0.000,0.000
>> /gluster/brick03/cinder-std-01.cumulative.GETSPEC=0,0,0.000,0.000,0.000
>> /gluster/brick03/cinder-std-01.incremental.GETSPEC=0,0,0.000,0.000,0.000
>>
>> /gluster/brick03/cinder-std-01.cumulative.FREMOVEXATTR=0,0,0.000,0.000,0.000
>>
>> /gluster/brick03/cinder-std-01.incremental.FREMOVEXATTR=0,0,0.000,0.000,0.000
>> /gluster/brick03/cinder-std-01.cumulative.FALLOCATE=0,0,0.000,0.000,0.000
>> /gluster/brick03/cinder-std-01.incremental.FALLOCATE=0,0,0.000,0.000,0.000
>> /gluster/brick03/cinder-std-01.cumulative.DISCARD=0,0,0.000,0.000,0.000
>> /gluster/brick03/cinder-std-01.incremental.DISCARD=0,0,0.000,0.000,0.000
>> /gluster/brick03/cinder-std-01.cumulative.ZEROFILL=0,0,0.000,0.000,0.000
>> /gluster/brick03/cinder-std-01.incremental.ZEROFILL=0,0,0.000,0.000,0.000
>>
>> [features/quota.gv-cinder-quota - Memory usage]
>> num_types=128
>>
>> [features/quota.gv-cinder-quota - usage-type gf_common_mt_asprintf
>> memusage]
>> size=30
>> num_allocs=1
>> max_size=124
>> max_num_allocs=2
>> total_allocs=6
>>
>> [features/quota.gv-cinder-quota - usage-type gf_common_mt_char memusage]
>> size=0
>> num_allocs=0
>> max_size=133
>> max_num_allocs=1
>> total_allocs=5
>>
>> [features/quota.gv-cinder-quota - usage-type gf_common_mt_mem_pool
>> memusage]
>> size=120
>> num_allocs=1
>> max_size=120
>> max_num_allocs=1
>> total_allocs=1
>>
>> [features/quota.gv-cinder-quota - usage-type gf_common_mt_long memusage]
>> size=26368
>> num_allocs=1
>> max_size=26368
>> max_num_allocs=1
>> total_allocs=1
>>
>> [features/quota.gv-cinder-quota - usage-type gf_quota_mt_quota_priv_t
>> memusage]
>> size=96
>> num_allocs=1
>> max_size=96
>> max_num_allocs=1
>> total_allocs=1
>>
>> [xlators.features.quota.priv]
>> soft-timeout=60
>> hard-timeout=5
>> alert-time=86400
>> quota-on=0
>> statfs=0
>> volume-uuid=gv-cinder
>> validation-count=0
>>
>> [features/marker.gv-cinder-marker - Memory usage]
>> num_types=124
>>
>> [features/marker.gv-cinder-marker - usage-type gf_common_mt_asprintf
>> memusage]
>> size=32
>> num_allocs=1
>> max_size=115
>> max_num_allocs=2
>> total_allocs=1745
>>
>> [features/marker.gv-cinder-marker - usage-type gf_common_mt_strdup
>> memusage]
>> size=0
>> num_allocs=0
>> max_size=42
>> max_num_allocs=2
>> total_allocs=25101
>>
>> [features/marker.gv-cinder-marker - usage-type gf_common_mt_char memusage]
>> size=0
>> num_allocs=0
>> max_size=99
>> max_num_allocs=1
>> total_allocs=1744
>>
>> [features/marker.gv-cinder-marker - usage-type gf_common_mt_mem_pool
>> memusage]
>> size=120
>> num_allocs=1
>> max_size=120
>> max_num_allocs=1
>> total_allocs=1
>>
>> [features/marker.gv-cinder-marker - usage-type gf_common_mt_long memusage]
>> size=42496
>> num_allocs=1
>> max_size=42496
>> max_num_allocs=1
>> total_allocs=1
>>
>> [features/marker.gv-cinder-marker - usage-type gf_marker_mt_marker_conf_t
>> memusage]
>> size=80
>> num_allocs=1
>> max_size=80
>> max_num_allocs=1
>> total_allocs=1
>>
>> [features/index.gv-cinder-index - Memory usage]
>> num_types=120
>>
>> [features/index.gv-cinder-index - usage-type gf_common_mt_gf_dirent_t
>> memusage]
>> size=0
>> num_allocs=0
>> max_size=3826
>> max_num_allocs=19
>> total_allocs=25757
>>
>> [features/index.gv-cinder-index - usage-type gf_common_mt_strdup memusage]
>> size=0
>> num_allocs=0
>> max_size=162
>> max_num_allocs=3
>> total_allocs=21020
>>
>> [features/index.gv-cinder-index - usage-type gf_common_mt_char memusage]
>> size=0
>> num_allocs=0
>> max_size=29
>> max_num_allocs=1
>> total_allocs=4981
>>
>> [features/index.gv-cinder-index - usage-type gf_index_mt_priv_t memusage]
>> size=152
>> num_allocs=1
>> max_size=152
>> max_num_allocs=1
>> total_allocs=1
>>
>> [features/index.gv-cinder-index - usage-type gf_index_inode_ctx_t
>> memusage]
>> size=160
>> num_allocs=5
>> max_size=192
>> max_num_allocs=6
>> total_allocs=791
>>
>> [features/index.gv-cinder-index - usage-type gf_index_fd_ctx_t memusage]
>> size=0
>> num_allocs=0
>> max_size=32
>> max_num_allocs=2
>> total_allocs=4981
>>
>> [features/barrier.gv-cinder-barrier - Memory usage]
>> num_types=118
>>
>> [features/barrier.gv-cinder-barrier - usage-type gf_common_mt_asprintf
>> memusage]
>> size=0
>> num_allocs=0
>> max_size=96
>> max_num_allocs=1
>> total_allocs=4
>>
>> [features/barrier.gv-cinder-barrier - usage-type gf_common_mt_char
>> memusage]
>> size=0
>> num_allocs=0
>> max_size=138
>> max_num_allocs=1
>> total_allocs=4
>>
>> [features/barrier.gv-cinder-barrier - usage-type gf_barrier_mt_priv_t
>> memusage]
>> size=56
>> num_allocs=1
>> max_size=56
>> max_num_allocs=1
>> total_allocs=1
>>
>> [xlator.features.barrier.priv]
>> barrier.enabled=0
>> barrier.timeout=120
>>
>> [performance/io-threads.gv-cinder-io-threads - Memory usage]
>> num_types=118
>>
>> [performance/io-threads.gv-cinder-io-threads - usage-type
>> gf_common_mt_iovec memusage]
>> size=0
>> num_allocs=0
>> max_size=208
>> max_num_allocs=13
>> total_allocs=303078
>>
>> [performance/io-threads.gv-cinder-io-threads - usage-type
>> gf_common_mt_asprintf memusage]
>> size=0
>> num_allocs=0
>> max_size=176
>> max_num_allocs=2
>> total_allocs=410217
>>
>> [performance/io-threads.gv-cinder-io-threads - usage-type
>> gf_common_mt_strdup memusage]
>> size=0
>> num_allocs=0
>> max_size=212
>> max_num_allocs=8
>> total_allocs=169882954
>>
>> [performance/io-threads.gv-cinder-io-threads - usage-type
>> gf_common_mt_char memusage]
>> size=0
>> num_allocs=0
>> max_size=238
>> max_num_allocs=2
>> total_allocs=410217
>>
>> [performance/io-threads.gv-cinder-io-threads - usage-type
>> gf_iot_mt_iot_conf_t memusage]
>> size=376
>> num_allocs=1
>> max_size=376
>> max_num_allocs=1
>> total_allocs=1
>>
>> [performance/io-threads.gv-cinder-io-threads]
>> maximum_threads_count=16
>> current_threads_count=1
>> sleep_count=1
>> idle_time=120
>> stack_size=1048576
>> high_priority_threads=16
>> normal_priority_threads=16
>> low_priority_threads=16
>> least_priority_threads=1
>> cached least rate=24
>> least rate limit=0
>>
>> [features/locks.gv-cinder-locks - Memory usage]
>> num_types=126
>>
>> [features/locks.gv-cinder-locks - usage-type gf_common_mt_asprintf
>> memusage]
>> size=27
>> num_allocs=1
>> max_size=145
>> max_num_allocs=7
>> total_allocs=271430
>>
>> [features/locks.gv-cinder-locks - usage-type gf_common_mt_strdup memusage]
>> size=286
>> num_allocs=10
>> max_size=4023
>> max_num_allocs=66
>> total_allocs=84845125
>>
>> [features/locks.gv-cinder-locks - usage-type gf_common_mt_char memusage]
>> size=0
>> num_allocs=0
>> max_size=201
>> max_num_allocs=6
>> total_allocs=251416
>>
>> [features/locks.gv-cinder-locks - usage-type gf_common_mt_mem_pool
>> memusage]
>> size=120
>> num_allocs=1
>> max_size=120
>> max_num_allocs=1
>> total_allocs=1
>>
>> [features/locks.gv-cinder-locks - usage-type gf_common_mt_long memusage]
>> size=4736
>> num_allocs=1
>> max_size=4736
>> max_num_allocs=1
>> total_allocs=1
>>
>> [features/locks.gv-cinder-locks - usage-type gf_locks_mt_pl_dom_list_t
>> memusage]
>> size=880
>> num_allocs=10
>> max_size=880
>> max_num_allocs=10
>> total_allocs=10
>>
>> [features/locks.gv-cinder-locks - usage-type gf_locks_mt_pl_inode_t
>> memusage]
>> size=2112
>> num_allocs=12
>> max_size=2288
>> max_num_allocs=13
>> total_allocs=886
>>
>> [features/locks.gv-cinder-locks - usage-type gf_locks_mt_posix_lock_t
>> memusage]
>> size=504
>> num_allocs=7
>> max_size=576
>> max_num_allocs=8
>> total_allocs=25
>>
>> [features/locks.gv-cinder-locks - usage-type gf_locks_mt_pl_entry_lock_t
>> memusage]
>> size=0
>> num_allocs=0
>> max_size=3552
>> max_num_allocs=3
>> total_allocs=13410
>>
>> [features/locks.gv-cinder-locks - usage-type gf_locks_mt_pl_inode_lock_t
>> memusage]
>> size=0
>> num_allocs=0
>> max_size=125440
>> max_num_allocs=56
>> total_allocs=84709087
>>
>> [features/locks.gv-cinder-locks - usage-type
>> gf_locks_mt_posix_locks_private_t memusage]
>> size=16
>> num_allocs=1
>> max_size=16
>> max_num_allocs=1
>> total_allocs=1
>>
>> [features/locks.gv-cinder-locks - usage-type gf_locks_mt_pl_fdctx_t
>> memusage]
>> size=48
>> num_allocs=3
>> max_size=96
>> max_num_allocs=6
>> total_allocs=19966
>>
>> [features/access-control.gv-cinder-access-control - Memory usage]
>> num_types=121
>>
>> [features/access-control.gv-cinder-access-control - usage-type
>> gf_common_mt_asprintf memusage]
>> size=0
>> num_allocs=0
>> max_size=20
>> max_num_allocs=10
>> total_allocs=238292
>>
>> [features/access-control.gv-cinder-access-control - usage-type
>> gf_common_mt_char memusage]
>> size=0
>> num_allocs=0
>> max_size=245
>> max_num_allocs=10
>> total_allocs=237822
>>
>> [features/access-control.gv-cinder-access-control - usage-type
>> gf_posix_acl_mt_ctx_t memusage]
>> size=992
>> num_allocs=31
>> max_size=1024
>> max_num_allocs=32
>> total_allocs=921
>>
>> [features/access-control.gv-cinder-access-control - usage-type
>> gf_posix_acl_mt_posix_ace_t memusage]
>> size=32
>> num_allocs=1
>> max_size=32
>> max_num_allocs=1
>> total_allocs=1
>>
>> [features/access-control.gv-cinder-access-control - usage-type
>> gf_posix_acl_mt_conf_t memusage]
>> size=16
>> num_allocs=1
>> max_size=16
>> max_num_allocs=1
>> total_allocs=1
>>
>> [features/changelog.gv-cinder-changelog - Memory usage]
>> num_types=127
>>
>> [features/changelog.gv-cinder-changelog - usage-type
>> gf_common_mt_asprintf memusage]
>> size=38
>> num_allocs=1
>> max_size=136
>> max_num_allocs=2
>> total_allocs=7
>>
>> [features/changelog.gv-cinder-changelog - usage-type gf_common_mt_strdup
>> memusage]
>> size=84
>> num_allocs=2
>> max_size=84
>> max_num_allocs=2
>> total_allocs=6
>>
>> [features/changelog.gv-cinder-changelog - usage-type gf_common_mt_char
>> memusage]
>> size=0
>> num_allocs=0
>> max_size=186
>> max_num_allocs=1
>> total_allocs=6
>>
>> [features/changelog.gv-cinder-changelog - usage-type
>> gf_common_mt_mem_pool memusage]
>> size=120
>> num_allocs=1
>> max_size=120
>> max_num_allocs=1
>> total_allocs=1
>>
>> [features/changelog.gv-cinder-changelog - usage-type gf_common_mt_long
>> memusage]
>> size=7424
>> num_allocs=1
>> max_size=7424
>> max_num_allocs=1
>> total_allocs=1
>>
>> [features/changelog.gv-cinder-changelog - usage-type
>> gf_changelog_mt_priv_t memusage]
>> size=760
>> num_allocs=1
>> max_size=760
>> max_num_allocs=1
>> total_allocs=1
>>
>> [features/changelog.gv-cinder-changelog - usage-type gf_changelog_mt_rt_t
>> memusage]
>> size=4
>> num_allocs=1
>> max_size=4
>> max_num_allocs=1
>> total_allocs=1
>>
>> [storage/posix.gv-cinder-posix - Memory usage]
>> num_types=125
>>
>> [storage/posix.gv-cinder-posix - usage-type gf_common_mt_gf_dirent_t
>> memusage]
>> size=0
>> num_allocs=0
>> max_size=4055
>> max_num_allocs=19
>> total_allocs=494
>>
>> [storage/posix.gv-cinder-posix - usage-type gf_common_mt_inode_ctx
>> memusage]
>> size=7488
>> num_allocs=26
>> max_size=7776
>> max_num_allocs=27
>> total_allocs=39
>>
>> [storage/posix.gv-cinder-posix - usage-type gf_common_mt_iobref memusage]
>> size=0
>> num_allocs=0
>> max_size=144
>> max_num_allocs=6
>> total_allocs=48833565
>>
>> [storage/posix.gv-cinder-posix - usage-type gf_common_mt_asprintf
>> memusage]
>> size=0
>> num_allocs=0
>> max_size=95
>> max_num_allocs=7
>> total_allocs=201906
>>
>> [storage/posix.gv-cinder-posix - usage-type gf_common_mt_strdup memusage]
>> size=31
>> num_allocs=1
>> max_size=166
>> max_num_allocs=2
>> total_allocs=5110
>>
>> [storage/posix.gv-cinder-posix - usage-type gf_common_mt_char memusage]
>> size=256
>> num_allocs=1
>> max_size=691
>> max_num_allocs=16
>> total_allocs=522677
>>
>> [storage/posix.gv-cinder-posix - usage-type gf_common_mt_iobrefs memusage]
>> size=0
>> num_allocs=0
>> max_size=768
>> max_num_allocs=6
>> total_allocs=48833565
>>
>> [storage/posix.gv-cinder-posix - usage-type gf_posix_mt_posix_fd memusage]
>> size=96
>> num_allocs=2
>> max_size=240
>> max_num_allocs=5
>> total_allocs=22484
>>
>> [storage/posix.gv-cinder-posix - usage-type gf_posix_mt_char memusage]
>> size=0
>> num_allocs=0
>> max_size=345
>> max_num_allocs=15
>> total_allocs=1011303
>>
>> [storage/posix.gv-cinder-posix - usage-type gf_posix_mt_posix_private
>> memusage]
>> size=608
>> num_allocs=1
>> max_size=608
>> max_num_allocs=1
>> total_allocs=1
>>
>> [storage/posix.gv-cinder-posix - usage-type gf_posix_mt_trash_path
>> memusage]
>> size=51
>> num_allocs=1
>> max_size=51
>> max_num_allocs=1
>> total_allocs=1
>>
>> [storage/posix.gv-cinder-posix]
>> base_path=/gluster/brick03/cinder-std-01
>> base_path_length=30
>> max_read=-220679111
>> max_write=71495680
>> nr_files=-2516
>>
>> DUMP-END-TIME: 2016-02-13 02:42:44.674718
>>
>> _______________________________________________
>> Gluster-devel mailing list
>> Gluster-devel at gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-devel
>>
>
>
>
> --
> Raghavendra G
>



-- 
Raghavendra G