[Gluster-users] NFS problem
Christopher Anderlik
christopher.anderlik at xidras.com
Fri Jun 10 08:29:26 UTC 2011
Hello,

we had core dump files, so I ran the following:
gdb /opt/glusterfs/3.2.0/sbin/glusterfsd --core core.12748 --batch --quiet \
    -ex "thread apply all bt full" -ex "quit" > gfs-server-crash-full-stacktrace-1.txt
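In case it is useful to anyone else, the same batch extraction can be looped over all core files in a directory; a minimal sketch (the output file names are made up, and the binary path assumes our 3.2.0 install):

```shell
#!/bin/sh
# Dump a full per-thread backtrace from every core file in the current
# directory, one output file per core.
BIN=/opt/glusterfs/3.2.0/sbin/glusterfsd   # assumed path of the crashed binary

for core in core.*; do
    [ -e "$core" ] || continue             # glob matched nothing: skip
    out="stacktrace-${core}.txt"
    gdb "$BIN" --core "$core" --batch --quiet \
        -ex "thread apply all bt full" -ex "quit" > "$out" 2>&1
    echo "wrote $out"
done
```

The `--batch --quiet` pair makes gdb exit after the `-ex` commands without prompting, so this is safe to run unattended.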
and here is the output - does this help?
cat gfs-server-crash-full-stacktrace-1.txt
[New Thread 13173]
[New Thread 13069]
[New Thread 13064]
[New Thread 12757]
[New Thread 13100]
[New Thread 12751]
[New Thread 12749]
[New Thread 12748]
Core was generated by `/opt/glusterfs/3.2.0/sbin/glusterfsd --xlator-option community-server.listen-po'.
Program terminated with signal 11, Segmentation fault.
#0 0x00007f48dd2b2aa8 in marker_setattr_cbk (frame=0x7f48df4f64ac, cookie=0x7f48df4f5f00,
this=0x7f48d800aaf0, op_ret=-1, op_errno=1, statpre=0x0, statpost=0x0) at marker.c:1590
in marker.c
Thread 9 (Thread 12748):
#0 0x00007f48dffd3728 in epoll_wait () from /lib64/libc.so.6
No symbol table info available.
#1 0x00007f48e0af83dc in event_dispatch_epoll (event_pool=0x62e320) at event.c:839
events = (struct epoll_event *) 0x7f48d80008c0
size = 1
i = 1
ret = 0
__FUNCTION__ = "event_dispatch_epoll"
#2 0x00007f48e0af879c in event_dispatch (event_pool=0x62e320) at event.c:956
ret = -1
__FUNCTION__ = "event_dispatch"
#3 0x00000000004072a1 in main (argc=17, argv=0x7fff81f3d708) at glusterfsd.c:1476
ctx = (glusterfs_ctx_t *) 0x62e010
ret = 0
__FUNCTION__ = "main"
Thread 8 (Thread 12749):
#0 0x00007f48e0267767 in do_sigwait () from /lib64/libpthread.so.0
No symbol table info available.
#1 0x00007f48e026780d in sigwait () from /lib64/libpthread.so.0
No symbol table info available.
#2 0x0000000000406b52 in glusterfs_sigwaiter (arg=0x7fff81f3d530) at glusterfsd.c:1229
set = {__val = {139950974949891, 139950986281881, 5, 0, 0, 0, 139950971458432,
139950957934928, 0, 139950986306178, 139950957934928, 139950974959615, 18446744073709551520, 24,
139950957933008,
139950957934928}}
ret = 0
sig = 0
#3 0x00007f48e0260070 in start_thread () from /lib64/libpthread.so.0
No symbol table info available.
#4 0x00007f48dffd313d in clone () from /lib64/libc.so.6
No symbol table info available.
#5 0x0000000000000000 in ?? ()
No symbol table info available.
Thread 7 (Thread 12751):
#0 0x00007f48dffa1cb1 in nanosleep () from /lib64/libc.so.6
No symbol table info available.
#1 0x00007f48dffcce64 in usleep () from /lib64/libc.so.6
No symbol table info available.
#2 0x00007f48e0ae04e8 in gf_timer_proc (ctx=0x62e010) at timer.c:181
now = 1307629931130597
now_tv = {tv_sec = 1307629931, tv_usec = 130597}
event = (gf_timer_t *) 0x644d50
reg = (gf_timer_registry_t *) 0x631d20
__FUNCTION__ = "gf_timer_proc"
#3 0x00007f48e0260070 in start_thread () from /lib64/libpthread.so.0
No symbol table info available.
#4 0x00007f48dffd313d in clone () from /lib64/libc.so.6
No symbol table info available.
#5 0x0000000000000000 in ?? ()
No symbol table info available.
Thread 6 (Thread 13100):
#0 0x00007f48e0263fdd in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
No symbol table info available.
#1 0x00007f48dd4c74e3 in iot_worker (data=0x7f48d8010630) at io-threads.c:101
conf = (iot_conf_t *) 0x7f48d8010630
this = (xlator_t *) 0x7f48d8009a50
stub = (call_stub_t *) 0x7f48df22aed4
sleep_till = {tv_sec = 1307630051, tv_nsec = 0}
ret = 0
timeout = 0 '\0'
bye = 0 '\0'
__FUNCTION__ = "iot_worker"
#2 0x00007f48e0260070 in start_thread () from /lib64/libpthread.so.0
No symbol table info available.
#3 0x00007f48dffd313d in clone () from /lib64/libc.so.6
No symbol table info available.
#4 0x0000000000000000 in ?? ()
No symbol table info available.
Thread 5 (Thread 12757):
#0 0x00007f48e0263fdd in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
No symbol table info available.
#1 0x00007f48ddb0bc32 in janitor_get_next_fd (this=0x7f48d8006690) at posix.c:1219
priv = (struct posix_private *) 0x7f48d8010840
pfd = (struct posix_fd *) 0x0
timeout = {tv_sec = 1307630531, tv_nsec = 0}
#2 0x00007f48ddb0bda9 in posix_janitor_thread_proc (data=0x7f48d8006690) at posix.c:1265
this = (xlator_t *) 0x7f48d8006690
priv = (struct posix_private *) 0x7f48d8010840
pfd = (struct posix_fd *) 0x7f48d803c340
now = 1307629931
__FUNCTION__ = "posix_janitor_thread_proc"
#3 0x00007f48e0260070 in start_thread () from /lib64/libpthread.so.0
No symbol table info available.
#4 0x00007f48dffd313d in clone () from /lib64/libc.so.6
No symbol table info available.
#5 0x0000000000000000 in ?? ()
No symbol table info available.
Thread 4 (Thread 13064):
#0 0x00007f48e0263fdd in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
No symbol table info available.
#1 0x00007f48dd4c74e3 in iot_worker (data=0x7f48d8010630) at io-threads.c:101
conf = (iot_conf_t *) 0x7f48d8010630
this = (xlator_t *) 0x7f48d8009a50
stub = (call_stub_t *) 0x7f48df22aed4
sleep_till = {tv_sec = 1307630051, tv_nsec = 0}
ret = 0
timeout = 0 '\0'
bye = 0 '\0'
__FUNCTION__ = "iot_worker"
#2 0x00007f48e0260070 in start_thread () from /lib64/libpthread.so.0
No symbol table info available.
#3 0x00007f48dffd313d in clone () from /lib64/libc.so.6
No symbol table info available.
#4 0x0000000000000000 in ?? ()
No symbol table info available.
Thread 3 (Thread 13069):
#0 0x00007f48e0263fdd in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
No symbol table info available.
#1 0x00007f48dd4c74e3 in iot_worker (data=0x7f48d8010630) at io-threads.c:101
conf = (iot_conf_t *) 0x7f48d8010630
this = (xlator_t *) 0x7f48d8009a50
stub = (call_stub_t *) 0x7f48df22aed4
sleep_till = {tv_sec = 1307630051, tv_nsec = 0}
ret = 0
timeout = 0 '\0'
bye = 0 '\0'
__FUNCTION__ = "iot_worker"
#2 0x00007f48e0260070 in start_thread () from /lib64/libpthread.so.0
No symbol table info available.
#3 0x00007f48dffd313d in clone () from /lib64/libc.so.6
No symbol table info available.
#4 0x0000000000000000 in ?? ()
No symbol table info available.
Thread 2 (Thread 13173):
#0 0x00007f48e0263fdd in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
No symbol table info available.
#1 0x00007f48dd4c74e3 in iot_worker (data=0x7f48d8010630) at io-threads.c:101
conf = (iot_conf_t *) 0x7f48d8010630
this = (xlator_t *) 0x7f48d8009a50
stub = (call_stub_t *) 0x7f48df22aed4
sleep_till = {tv_sec = 1307630051, tv_nsec = 0}
ret = 0
timeout = 0 '\0'
bye = 0 '\0'
__FUNCTION__ = "iot_worker"
#2 0x00007f48e0260070 in start_thread () from /lib64/libpthread.so.0
No symbol table info available.
#3 0x00007f48dffd313d in clone () from /lib64/libc.so.6
No symbol table info available.
#4 0x0000000000000000 in ?? ()
No symbol table info available.
Thread 1 (Thread 12756):
#0 0x00007f48dd2b2aa8 in marker_setattr_cbk (frame=0x7f48df4f64ac, cookie=0x7f48df4f5f00,
this=0x7f48d800aaf0, op_ret=-1, op_errno=1, statpre=0x0, statpost=0x0) at marker.c:1590
local = (marker_local_t *) 0x0
priv = (marker_conf_t *) 0x0
__FUNCTION__ = "marker_setattr_cbk"
#1 0x00007f48dd4c7d66 in iot_setattr_cbk (frame=0x7f48df4f5f00, cookie=0x7f48df4f3644,
this=0x7f48d8009a50, op_ret=-1, op_errno=1, preop=0x0, postop=0x0) at io-threads.c:251
fn = (fop_setattr_cbk_t) 0x7f48dd2b2a1e <marker_setattr_cbk>
_parent = (call_frame_t *) 0x7f48df4f64ac
old_THIS = (xlator_t *) 0x7f48d8009a50
__FUNCTION__ = "iot_setattr_cbk"
#2 0x00007f48e0ad21aa in default_setattr_cbk (frame=0x7f48df4f3644, cookie=0x7f48df4fa184,
this=0x7f48d8008a10, op_ret=-1, op_errno=1, statpre=0x0, statpost=0x0) at defaults.c:405
fn = (fop_setattr_cbk_t) 0x7f48dd4c7c3b <iot_setattr_cbk>
_parent = (call_frame_t *) 0x7f48df4f5f00
old_THIS = (xlator_t *) 0x7f48d8008a10
__FUNCTION__ = "default_setattr_cbk"
#3 0x00007f48dd8fe6d7 in ac_setattr_stat_cbk (frame=0x7f48df4fa184, cookie=0x7f48df4f542c,
this=0x7f48d80078a0, op_ret=-1, op_errno=1, buf=0x7f48dcbafe90) at access-control.c:1843
fn = (fop_setattr_cbk_t) 0x7f48e0ad207f <default_setattr_cbk>
_parent = (call_frame_t *) 0x7f48df4f3644
old_THIS = (xlator_t *) 0x7f48d80078a0
stub = (call_stub_t *) 0x7f48df22a240
valid = 0
setbuf = (struct iatt *) 0x0
__FUNCTION__ = "ac_setattr_stat_cbk"
#4 0x00007f48ddb087d9 in posix_stat (frame=0x7f48df4f542c, this=0x7f48d8006690, loc=0x7f48df22af0c)
at posix.c:518
fn = (fop_stat_cbk_t) 0x7f48dd8fe3b1 <ac_setattr_stat_cbk>
_parent = (call_frame_t *) 0x7f48df4fa184
old_THIS = (xlator_t *) 0x7f48d8006690
buf = {ia_ino = 390291665, ia_gfid = "\201°£T\223³D\000°]DßjTÅ\\", ia_dev = 2081, ia_type =
IA_IFDIR, ia_prot = {suid = 0 '\0', sgid = 0 '\0', sticky = 0 '\0', owner = {read = 1 '\001',
write = 1 '\001', exec = 1 '\001'}, group = {read = 1 '\001', write = 0 '\0', exec = 1
'\001'}, other = {read = 1 '\001', write = 0 '\0', exec = 1 '\001'}}, ia_nlink = 3, ia_uid = 1005,
ia_gid = 100, ia_rdev = 0, ia_size = 4096, ia_blksize = 4096, ia_blocks = 16, ia_atime =
1306418332, ia_atime_nsec = 0, ia_mtime = 1306417107, ia_mtime_nsec = 0, ia_ctime = 1307430525,
ia_ctime_nsec = 0}
real_path = 0x7f48dcbafdd0 "/gluster-storage/community/flirty/rotlichkartei_base/static/images"
op_ret = 0
op_errno = 0
priv = (struct posix_private *) 0x7f48d8010840
__FUNCTION__ = "posix_stat"
#5 0x00007f48dd8fe958 in ac_setattr (frame=0x7f48df4fa184, this=0x7f48d80078a0, loc=0x7f48df22af0c,
buf=0x7f48df22af34, valid=48) at access-control.c:1870
_new = (call_frame_t *) 0x7f48df4f542c
old_THIS = (xlator_t *) 0x7f48d80078a0
tmp_cbk = (fop_stat_cbk_t) 0x7f48dd8fe3b1 <ac_setattr_stat_cbk>
stub = (call_stub_t *) 0x7f48df22a240
ret = -14
__FUNCTION__ = "ac_setattr"
#6 0x00007f48e0adb967 in default_setattr (frame=0x7f48df4f3644, this=0x7f48d8008a10,
loc=0x7f48df22af0c, stbuf=0x7f48df22af34, valid=48) at defaults.c:1131
_new = (call_frame_t *) 0x7f48df4fa184
old_THIS = (xlator_t *) 0x7f48d8008a10
tmp_cbk = (fop_setattr_cbk_t) 0x7f48e0ad207f <default_setattr_cbk>
__FUNCTION__ = "default_setattr"
#7 0x00007f48dd4c7f5e in iot_setattr_wrapper (frame=0x7f48df4f5f00, this=0x7f48d8009a50,
loc=0x7f48df22af0c, stbuf=0x7f48df22af34, valid=48) at io-threads.c:260
_new = (call_frame_t *) 0x7f48df4f3644
old_THIS = (xlator_t *) 0x7f48d8009a50
tmp_cbk = (fop_setattr_cbk_t) 0x7f48dd4c7c3b <iot_setattr_cbk>
__FUNCTION__ = "iot_setattr_wrapper"
#8 0x00007f48e0aee296 in call_resume_wind (stub=0x7f48df22aed4) at call-stub.c:2467
__FUNCTION__ = "call_resume_wind"
#9 0x00007f48e0af4020 in call_resume (stub=0x7f48df22aed4) at call-stub.c:3861
old_THIS = (xlator_t *) 0x7f48d8009a50
__FUNCTION__ = "call_resume"
#10 0x00007f48dd4c75c6 in iot_worker (data=0x7f48d8010630) at io-threads.c:129
conf = (iot_conf_t *) 0x7f48d8010630
this = (xlator_t *) 0x7f48d8009a50
stub = (call_stub_t *) 0x7f48df22aed4
sleep_till = {tv_sec = 1307630051, tv_nsec = 0}
ret = 0
timeout = 0 '\0'
bye = 0 '\0'
__FUNCTION__ = "iot_worker"
#11 0x00007f48e0260070 in start_thread () from /lib64/libpthread.so.0
No symbol table info available.
#12 0x00007f48dffd313d in clone () from /lib64/libc.so.6
No symbol table info available.
#13 0x0000000000000000 in ?? ()
No symbol table info available.
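If I read thread 1 correctly, marker_setattr_cbk was entered with op_ret=-1 and statpre/statpost both NULL (local and priv are NULL as well), so the failure path apparently dereferences a NULL stat pointer or frame->local at marker.c:1590. I don't know what the actual upstream fix looks like; the following is only a sketch of the NULL-guard pattern such a failure-path callback needs, with illustrative stand-in types rather than GlusterFS's real ones:

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative stand-ins for the real GlusterFS structures. */
struct iatt  { long ia_size; };
struct local { int  dirty;   };

/* A setattr callback in the style of marker_setattr_cbk: when the
 * operation failed (op_ret < 0), the pre/post stat pointers arrive as
 * NULL, so they must be checked before use -- dereferencing statpost
 * unconditionally is exactly the kind of bug that produces the
 * SIGSEGV seen in the trace above. */
static int setattr_cbk(struct local *local, int op_ret, int op_errno,
                       struct iatt *statpre, struct iatt *statpost)
{
    (void)statpre;
    if (op_ret < 0 || statpost == NULL) {
        /* Failure path: propagate the error without touching the stats. */
        return -op_errno;
    }
    if (local != NULL)
        local->dirty = 1;           /* only update state on success */
    return (int)statpost->ia_size;  /* safe: checked above */
}
```

With guards like these, the op_ret=-1 / statpost=NULL combination from the trace would unwind cleanly instead of segfaulting.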
On 10.06.2011 09:23, Shehjar Tikoo wrote:
> We'll need the crash stack trace also.
>
>
> Christopher Anderlik wrote:
>> here are our logs when nfs is crashing....
>>
>>
>>
>>
>>
>> [2011-06-10 08:54:14.900049] D [nfs3-helpers.c:2424:nfs3_log_common_res] 0-nfs-nfsv3: XID:
>> f8851fc2, ACCESS: NFS: 0(Call completed successfully.), POSIX: 0(Success)
>> [2011-06-10 08:54:14.902002] D [rpcsvc.c:1940:nfs_rpcsvc_request_create] 0-nfsrpc: RPC XID:
>> f9851fc2, Ver: 2, Program: 100003, ProgVers: 3, Proc: 1
>> [2011-06-10 08:54:14.902037] D [rpcsvc.c:1357:nfs_rpcsvc_program_actor] 0-nfsrpc: Actor found:
>> NFS3 - GETATTR
>> [2011-06-10 08:54:14.902062] D [nfs3-helpers.c:2292:nfs3_log_common_call] 0-nfs-nfsv3: XID:
>> f9851fc2, GETATTR: args: FH: hashcount 3, exportid ea50df7c-ff08-4416-8fb3-59d09667cc51, gfid
>> 74c48fb3-d065-462b-83a9-e4558b042465
>> [2011-06-10 08:54:14.920579] D [afr-transaction.c:976:afr_post_nonblocking_inodelk_cbk]
>> 0-ksc-replicate-0: Non blocking inodelks done. Proceeding to FOP
>> [2011-06-10 08:54:14.921099] D [client-lk.c:442:delete_granted_locks_fd] 0-ksc-client-0: Number of
>> locks cleared=0
>> [2011-06-10 08:54:14.921155] D [client-lk.c:442:delete_granted_locks_fd] 0-ksc-client-1: Number of
>> locks cleared=0
>> [2011-06-10 08:54:14.932629] D [nfs3-helpers.c:2424:nfs3_log_common_res] 0-nfs-nfsv3: XID:
>> f9851fc2, GETATTR: NFS: 0(Call completed successfully.), POSIX: 0(Success)
>> [2011-06-10 08:54:14.932863] D [rpcsvc.c:1940:nfs_rpcsvc_request_create] 0-nfsrpc: RPC XID:
>> fa851fc2, Ver: 2, Program: 100003, ProgVers: 3, Proc: 4
>> [2011-06-10 08:54:14.932890] D [rpcsvc.c:1357:nfs_rpcsvc_program_actor] 0-nfsrpc: Actor found:
>> NFS3 - ACCESS
>> [2011-06-10 08:54:14.932907] D [nfs3-helpers.c:2292:nfs3_log_common_call] 0-nfs-nfsv3: XID:
>> fa851fc2, ACCESS: args: FH: hashcount 3, exportid ea50df7c-ff08-4416-8fb3-59d09667cc51, gfid
>> 74c48fb3-d065-462b-83a9-e4558b042465
>> [2011-06-10 08:54:14.961700] D [socket.c:193:__socket_rwv] 0-ksc-client-0: EOF from peer
>> 10.0.1.198:24031
>> [2011-06-10 08:54:14.961741] W [socket.c:1494:__socket_proto_state_machine] 0-ksc-client-0:
>> reading from socket failed. Error (Transport endpoint is not connected), peer (10.0.1.198:24031)
>> [2011-06-10 08:54:14.961757] D [socket.c:1768:socket_event_handler] 0-transport: disconnecting now
>> [2011-06-10 08:54:14.961858] E [rpc-clnt.c:338:saved_frames_unwind]
>> (-->/opt/glusterfs/3.2.0/lib64/libgfrpc.so.0(rpc_clnt_notify+0x158) [0x7f140e8c0acc]
>> (-->/opt/glusterfs/3.2.0/lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x101) [0x7f140e8c006a]
>> (-->/opt/glusterfs/3.2.0/lib64/libgfrpc.so.0(saved_frames_destroy+0x1c) [0x7f140e8bfb78])))
>> 0-ksc-client-0: forced unwinding frame type(GlusterFS 3.1) op(SETATTR(38)) called at 2011-06-10
>> 08:54:14.920644
>> [2011-06-10 08:54:14.961880] I [client3_1-fops.c:1640:client3_1_setattr_cbk] 0-ksc-client-0:
>> remote operation failed: Transport endpoint is not connected
>> [2011-06-10 08:54:14.961940] I [client.c:1883:client_rpc_notify] 0-ksc-client-0: disconnected
>> [2011-06-10 08:54:14.988468] W [socket.c:204:__socket_rwv] 0-ksc-client-1: readv failed
>> (Connection reset by peer)
>> [2011-06-10 08:54:14.988490] W [socket.c:1494:__socket_proto_state_machine] 0-ksc-client-1:
>> reading from socket failed. Error (Connection reset by peer), peer (10.0.1.199:24027)
>> [2011-06-10 08:54:14.988501] D [socket.c:1768:socket_event_handler] 0-transport: disconnecting now
>> [2011-06-10 08:54:14.988551] E [rpc-clnt.c:338:saved_frames_unwind]
>> (-->/opt/glusterfs/3.2.0/lib64/libgfrpc.so.0(rpc_clnt_notify+0x158) [0x7f140e8c0acc]
>> (-->/opt/glusterfs/3.2.0/lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x101) [0x7f140e8c006a]
>> (-->/opt/glusterfs/3.2.0/lib64/libgfrpc.so.0(saved_frames_destroy+0x1c) [0x7f140e8bfb78])))
>> 0-ksc-client-1: forced unwinding frame type(GlusterFS 3.1) op(SETATTR(38)) called at 2011-06-10
>> 08:54:14.920655
>> [2011-06-10 08:54:14.988568] I [client3_1-fops.c:1640:client3_1_setattr_cbk] 0-ksc-client-1:
>> remote operation failed: Transport endpoint is not connected
>> [2011-06-10 08:54:14.988599] D [client.c:77:client_submit_request] 0-ksc-client-0: connection in
>> disconnected state
>> [2011-06-10 08:54:14.988630] W [client3_1-fops.c:4379:client3_1_xattrop] 0-ksc-client-0: failed to
>> send the fop: Transport endpoint is not connected
>> [2011-06-10 08:54:14.988657] D [name.c:157:client_fill_address_family] 0-ksc-client-1:
>> address-family not specified, guessing it to be inet/inet6
>> [2011-06-10 08:54:14.991719] D [common-utils.c:151:gf_resolve_ip6] 0-resolver: returning
>> ip-10.0.1.199 (port-24007) for hostname: 10.0.1.199 and port: 24007
>> [2011-06-10 08:54:14.991786] I [socket.c:2272:socket_submit_request] 0-ksc-client-1: not connected
>> (priv->connected = 0)
>> [2011-06-10 08:54:14.991803] W [rpc-clnt.c:1411:rpc_clnt_submit] 0-ksc-client-1: failed to submit
>> rpc-request (XID: 0x70357x Program: GlusterFS 3.1, ProgVers: 310, Proc: 33) to rpc-transport
>> (ksc-client-1)
>> [2011-06-10 08:54:14.991819] D [afr-lk-common.c:409:transaction_lk_op] 0-ksc-replicate-0: lk op is
>> for a transaction
>> [2011-06-10 08:54:14.991833] D [client.c:77:client_submit_request] 0-ksc-client-0: connection in
>> disconnected state
>> [2011-06-10 08:54:14.991845] W [client3_1-fops.c:4735:client3_1_inodelk] 0-ksc-client-0: failed to
>> send the fop: Transport endpoint is not connected
>> [2011-06-10 08:54:14.991862] W [rpc-clnt.c:1411:rpc_clnt_submit] 0-ksc-client-1: failed to submit
>> rpc-request (XID: 0x70358x Program: GlusterFS 3.1, ProgVers: 310, Proc: 29) to rpc-transport
>> (ksc-client-1)
>> [2011-06-10 08:54:14.991876] I [client3_1-fops.c:1226:client3_1_inodelk_cbk] 0-ksc-client-1:
>> remote operation failed: Transport endpoint is not connected
>> [2011-06-10 08:54:14.991904] D [nfs3-helpers.c:2477:nfs3_log_newfh_res] 0-nfs-nfsv3: XID:
>> c71a69ff, CREATE: NFS: 5(I/O error), POSIX: 107(Transport endpoint is not connected), FH:
>> hashcount 4, exportid ea50df7c-ff08-4416-8fb3-59d09667cc51, gfid f3e21f99-0aed-4042-8b81-16a804497b5b
>> [2011-06-10 08:54:14.991945] D [client.c:129:client_submit_request] 0-ksc-client-1:
>> rpc_clnt_submit failed
>> [2011-06-10 08:54:14.991967] D [client.c:129:client_submit_request] 0-ksc-client-1:
>> rpc_clnt_submit failed
>> [2011-06-10 08:54:14.992014] E [rpc-clnt.c:338:saved_frames_unwind]
>> (-->/opt/glusterfs/3.2.0/lib64/libgfrpc.so.0(rpc_clnt_notify+0x158) [0x7f140e8c0acc]
>> (-->/opt/glusterfs/3.2.0/lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x101) [0x7f140e8c006a]
>> (-->/opt/glusterfs/3.2.0/lib64/libgfrpc.so.0(saved_frames_destroy+0x1c) [0x7f140e8bfb78])))
>> 0-ksc-client-1: forced unwinding frame type(GlusterFS 3.1) op(RELEASE(41)) called at 2011-06-10
>> 08:54:14.921180
>> [2011-06-10 08:54:14.992046] E [rpc-clnt.c:338:saved_frames_unwind]
>> (-->/opt/glusterfs/3.2.0/lib64/libgfrpc.so.0(rpc_clnt_notify+0x158) [0x7f140e8c0acc]
>> (-->/opt/glusterfs/3.2.0/lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x101) [0x7f140e8c006a]
>> (-->/opt/glusterfs/3.2.0/lib64/libgfrpc.so.0(saved_frames_destroy+0x1c) [0x7f140e8bfb78])))
>> 0-ksc-client-1: forced unwinding frame type(GlusterFS 3.1) op(STAT(1)) called at 2011-06-10
>> 08:54:14.932952
>> [2011-06-10 08:54:14.992060] I [client3_1-fops.c:411:client3_1_stat_cbk] 0-ksc-client-1: remote
>> operation failed: Transport endpoint is not connected
>> [2011-06-10 08:54:14.992073] D [client.c:77:client_submit_request] 0-ksc-client-0: connection in
>> disconnected state
>> [2011-06-10 08:54:14.992091] W [client3_1-fops.c:2658:client3_1_stat] 0-ksc-client-0: failed to
>> send the fop Transport endpoint is not connected
>> [2011-06-10 08:54:14.992101] D [afr-inode-read.c:204:afr_stat_cbk] 0-ksc-replicate-0:
>> /pimp/htdocs/mountpoints.sh: all subvolumes tried, going out
>> [2011-06-10 08:54:14.992115] D [nfs3-helpers.c:2424:nfs3_log_common_res] 0-nfs-nfsv3: XID:
>> fa851fc2, ACCESS: NFS: 5(I/O error), POSIX: 107(Transport endpoint is not connected)
>> [2011-06-10 08:54:14.992181] I [client.c:1883:client_rpc_notify] 0-ksc-client-1: disconnected
>> [2011-06-10 08:54:14.992197] E [afr-common.c:2546:afr_notify] 0-ksc-replicate-0: All subvolumes
>> are down. Going offline until atleast one of them comes back up.
>> [2011-06-10 08:54:15.5285] D [rpcsvc.c:1940:nfs_rpcsvc_request_create] 0-nfsrpc: RPC XID:
>> fb851fc2, Ver: 2, Program: 100003, ProgVers: 3, Proc: 1
>> [2011-06-10 08:54:15.5312] D [rpcsvc.c:1357:nfs_rpcsvc_program_actor] 0-nfsrpc: Actor found: NFS3
>> - GETATTR
>> [2011-06-10 08:54:15.5344] D [nfs3-helpers.c:2292:nfs3_log_common_call] 0-nfs-nfsv3: XID:
>> fb851fc2, GETATTR: args: FH: hashcount 3, exportid ea50df7c-ff08-4416-8fb3-59d09667cc51, gfid
>> 74c48fb3-d065-462b-83a9-e4558b042465
>> [2011-06-10 08:54:15.5381] I [afr-inode-read.c:270:afr_stat] 0-ksc-replicate-0:
>> /pimp/htdocs/mountpoints.sh: no child is up
>> [2011-06-10 08:54:15.5399] D [nfs3-helpers.c:2424:nfs3_log_common_res] 0-nfs-nfsv3: XID: fb851fc2,
>> GETATTR: NFS: 5(I/O error), POSIX: 107(Transport endpoint is not connected)
>> [2011-06-10 08:54:15.5783] D [rpcsvc.c:1940:nfs_rpcsvc_request_create] 0-nfsrpc: RPC XID:
>> fc851fc2, Ver: 2, Program: 100003, ProgVers: 3, Proc: 3
>> [2011-06-10 08:54:15.5810] D [rpcsvc.c:1357:nfs_rpcsvc_program_actor] 0-nfsrpc: Actor found: NFS3
>> - LOOKUP
>> [2011-06-10 08:54:15.5833] D [nfs3-helpers.c:2304:nfs3_log_fh_entry_call] 0-nfs-nfsv3: XID:
>> fc851fc2, LOOKUP: args: FH: hashcount 2, exportid ea50df7c-ff08-4416-8fb3-59d09667cc51, gfid
>> ff18fc56-c352-4caf-b98d-4eff71494acc, name: mountpoints.sh
>> [2011-06-10 08:54:15.5897] D [nfs3.c:1080:nfs3_fresh_lookup] 0-nfs-nfsv3: inode needs fresh lookup
>> [2011-06-10 08:54:15.5954] D [nfs3-helpers.c:2477:nfs3_log_newfh_res] 0-nfs-nfsv3: XID: fc851fc2,
>> LOOKUP: NFS: 5(I/O error), POSIX: 107(Transport endpoint is not connected), FH: hashcount 0,
>> exportid 00000000-0000-0000-0000-000000000000, gfid 00000000-0000-0000-0000-000000000000
>> [2011-06-10 08:54:15.11858] E [socket.c:1685:socket_connect_finish] 0-ksc-client-1: connection to
>> 10.0.1.199:24027 failed (Connection refused)
>> [2011-06-10 08:54:15.11886] D [socket.c:289:__socket_disconnect] 0-ksc-client-1: shutdown()
>> returned -1. Transport endpoint is not connected
>> [2011-06-10 08:54:15.11921] D [socket.c:193:__socket_rwv] 0-ksc-client-1: EOF from peer
>> 10.0.1.199:24027
>> [2011-06-10 08:54:15.11938] D [socket.c:1494:__socket_proto_state_machine] 0-ksc-client-1: reading
>> from socket failed. Error (Transport endpoint is not connected), peer (10.0.1.199:24027)
>> [2011-06-10 08:54:15.11954] D [socket.c:1768:socket_event_handler] 0-transport: disconnecting now
>> [2011-06-10 08:54:15.158995] D [rpcsvc.c:1940:nfs_rpcsvc_request_create] 0-nfsrpc: RPC XID:
>> e1f47a24, Ver: 2, Program: 100003, ProgVers: 3, Proc: 1
>> [2011-06-10 08:54:15.159038] D [rpcsvc.c:1357:nfs_rpcsvc_program_actor] 0-nfsrpc: Actor found:
>> NFS3 - GETATTR
>> [2011-06-10 08:54:15.159065] D [nfs3-helpers.c:2292:nfs3_log_common_call] 0-nfs-nfsv3: XID:
>> e1f47a24, GETATTR: args: FH: hashcount 6, exportid ea50df7c-ff08-4416-8fb3-59d09667cc51, gfid
>> 9b62d8e0-cf19-487e-b56a-9ac980294801
>> [2011-06-10 08:54:15.159105] I [afr-inode-read.c:270:afr_stat] 0-ksc-replicate-0:
>> /vertrieb/htdocs/vertrieb_imrich/fetischtopliste.de/www/button.php: no child is up
>> [2011-06-10 08:54:15.159126] D [nfs3-helpers.c:2424:nfs3_log_common_res] 0-nfs-nfsv3: XID:
>> e1f47a24, GETATTR: NFS: 5(I/O error), POSIX: 107(Transport endpoint is not connected)
>> [2011-06-10 08:54:15.159501] D [rpcsvc.c:1940:nfs_rpcsvc_request_create] 0-nfsrpc: RPC XID:
>> e2f47a24, Ver: 2, Program: 100003, ProgVers: 3, Proc: 3
>> [2011-06-10 08:54:15.159528] D [rpcsvc.c:1357:nfs_rpcsvc_program_actor] 0-nfsrpc: Actor found:
>> NFS3 - LOOKUP
>> [2011-06-10 08:54:15.159550] D [nfs3-helpers.c:2304:nfs3_log_fh_entry_call] 0-nfs-nfsv3: XID:
>> e2f47a24, LOOKUP: args: FH: hashcount 5, exportid ea50df7c-ff08-4416-8fb3-59d09667cc51, gfid
>> dd60aa24-78fa-4509-abee-acf32f12542f, name: button.php
>> [2011-06-10 08:54:15.159604] D [nfs3.c:1080:nfs3_fresh_lookup] 0-nfs-nfsv3: inode needs fresh lookup
>> [2011-06-10 08:54:15.159654] D [nfs3-helpers.c:2477:nfs3_log_newfh_res] 0-nfs-nfsv3: XID:
>> e2f47a24, LOOKUP: NFS: 5(I/O error), POSIX: 107(Transport endpoint is not connected), FH:
>> hashcount 0, exportid 00000000-0000-0000-0000-000000000000, gfid 00000000-0000-0000-0000-000000000000
>> [2011-06-10 08:54:15.160014] D [rpcsvc.c:1940:nfs_rpcsvc_request_create] 0-nfsrpc: RPC XID:
>> e3f47a24, Ver: 2, Program: 100003, ProgVers: 3, Proc: 1
>> [2011-06-10 08:54:15.160041] D [rpcsvc.c:1357:nfs_rpcsvc_program_actor] 0-nfsrpc: Actor found:
>> NFS3 - GETATTR
>> [2011-06-10 08:54:15.160080] D [nfs3-helpers.c:2292:nfs3_log_common_call] 0-nfs-nfsv3: XID:
>> e3f47a24, GETATTR: args: FH: hashcount 5, exportid ea50df7c-ff08-4416-8fb3-59d09667cc51, gfid
>> dd60aa24-78fa-4509-abee-acf32f12542f
>> [2011-06-10 08:54:15.160113] I [afr-inode-read.c:270:afr_stat] 0-ksc-replicate-0:
>> /vertrieb/htdocs/vertrieb_imrich/fetischtopliste.de/www: no child is up
>> [2011-06-10 08:54:15.160130] D [nfs3-helpers.c:2424:nfs3_log_common_res] 0-nfs-nfsv3: XID:
>> e3f47a24, GETATTR: NFS: 5(I/O error), POSIX: 107(Transport endpoint is not connected)
>> [2011-06-10 08:54:15.160532] D [rpcsvc.c:1940:nfs_rpcsvc_request_create] 0-nfsrpc: RPC XID:
>> e4f47a24, Ver: 2, Program: 100003, ProgVers: 3, Proc: 3
>> [2011-06-10 08:54:15.160554] D [rpcsvc.c:1357:nfs_rpcsvc_program_actor] 0-nfsrpc: Actor found:
>> NFS3 - LOOKUP
>> [2011-06-10 08:54:15.160569] D [nfs3-helpers.c:2304:nfs3_log_fh_entry_call] 0-nfs-nfsv3: XID:
>> e4f47a24, LOOKUP: args: FH: hashcount 4, exportid ea50df7c-ff08-4416-8fb3-59d09667cc51, gfid
>> 434fc86c-5bed-4a38-b919-8822d38bd1a3, name: www
>> [2011-06-10 08:54:15.160611] D [nfs3.c:1080:nfs3_fresh_lookup] 0-nfs-nfsv3: inode needs fresh lookup
>> [2011-06-10 08:54:15.160657] D [nfs3-helpers.c:2477:nfs3_log_newfh_res] 0-nfs-nfsv3: XID:
>> e4f47a24, LOOKUP: NFS: 5(I/O error), POSIX: 107(Transport endpoint is not connected), FH:
>> hashcount 0, exportid 00000000-0000-0000-0000-000000000000, gfid 00000000-0000-0000-0000-000000000000
>> [2011-06-10 08:54:15.192962] D [rpcsvc.c:1940:nfs_rpcsvc_request_create] 0-nfsrpc: RPC XID:
>> c81a69ff, Ver: 2, Program: 100003, ProgVers: 3, Proc: 1
>> [2011-06-10 08:54:15.193000] D [rpcsvc.c:1357:nfs_rpcsvc_program_actor] 0-nfsrpc: Actor found:
>> NFS3 - GETATTR
>> [2011-06-10 08:54:15.193019] D [nfs3-helpers.c:2292:nfs3_log_common_call] 0-nfs-nfsv3: XID:
>> c81a69ff, GETATTR: args: FH: hashcount 3, exportid ea50df7c-ff08-4416-8fb3-59d09667cc51, gfid
>> 61273392-07cb-4427-bb48-2cf869031802
>> [2011-06-10 08:54:15.193050] I [afr-inode-read.c:270:afr_stat] 0-ksc-replicate-0:
>> /tomorrowwinners.com/typo3temp/locks: no child is up
>> [2011-06-10 08:54:15.193069] D [nfs3-helpers.c:2424:nfs3_log_common_res] 0-nfs-nfsv3: XID:
>> c81a69ff, GETATTR: NFS: 5(I/O error), POSIX: 107(Transport endpoint is not connected)
>> [2011-06-10 08:54:15.193526] D [rpcsvc.c:1940:nfs_rpcsvc_request_create] 0-nfsrpc: RPC XID:
>> c91a69ff, Ver: 2, Program: 100003, ProgVers: 3, Proc: 3
>> [2011-06-10 08:54:15.193547] D [rpcsvc.c:1357:nfs_rpcsvc_program_actor] 0-nfsrpc: Actor found:
>> NFS3 - LOOKUP
>> [2011-06-10 08:54:15.193564] D [nfs3-helpers.c:2304:nfs3_log_fh_entry_call] 0-nfs-nfsv3: XID:
>> c91a69ff, LOOKUP: args: FH: hashcount 2, exportid ea50df7c-ff08-4416-8fb3-59d09667cc51, gfid
>> 96012827-9101-40a1-b540-b6c47fe49e59, name: locks
>> [2011-06-10 08:54:15.193607] D [nfs3.c:1080:nfs3_fresh_lookup] 0-nfs-nfsv3: inode needs fresh lookup
>> [2011-06-10 08:54:15.193654] D [nfs3-helpers.c:2477:nfs3_log_newfh_res] 0-nfs-nfsv3: XID:
>> c91a69ff, LOOKUP: NFS: 5(I/O error), POSIX: 107(Transport endpoint is not connected), FH:
>> hashcount 0, exportid 00000000-0000-0000-0000-000000000000, gfid 00000000-0000-0000-0000-000000000000
>> [2011-06-10 08:54:15.193987] D [rpcsvc.c:1940:nfs_rpcsvc_request_create] 0-nfsrpc: RPC XID:
>> ca1a69ff, Ver: 2, Program: 100003, ProgVers: 3, Proc: 1
>> [2011-06-10 08:54:15.194015] D [rpcsvc.c:1357:nfs_rpcsvc_program_actor] 0-nfsrpc: Actor found:
>> NFS3 - GETATTR
>> [2011-06-10 08:54:15.194037] D [nfs3-helpers.c:2292:nfs3_log_common_call] 0-nfs-nfsv3: XID:
>> ca1a69ff, GETATTR: args: FH: hashcount 2, exportid ea50df7c-ff08-4416-8fb3-59d09667cc51, gfid
>> 96012827-9101-40a1-b540-b6c47fe49e59
>> [2011-06-10 08:54:15.194070] I [afr-inode-read.c:270:afr_stat] 0-ksc-replicate-0:
>> /tomorrowwinners.com/typo3temp: no child is up
>> [2011-06-10 08:54:15.194086] D [nfs3-helpers.c:2424:nfs3_log_common_res] 0-nfs-nfsv3: XID:
>> ca1a69ff, GETATTR: NFS: 5(I/O error), POSIX: 107(Transport endpoint is not connected)
>> [2011-06-10 08:54:15.194566] D [rpcsvc.c:1940:nfs_rpcsvc_request_create] 0-nfsrpc: RPC XID:
>> cb1a69ff, Ver: 2, Program: 100003, ProgVers: 3, Proc: 3
>> [2011-06-10 08:54:15.194587] D [rpcsvc.c:1357:nfs_rpcsvc_program_actor] 0-nfsrpc: Actor found:
>> NFS3 - LOOKUP
>> [2011-06-10 08:54:15.194614] D [nfs3-helpers.c:2304:nfs3_log_fh_entry_call] 0-nfs-nfsv3: XID:
>> cb1a69ff, LOOKUP: args: FH: hashcount 1, exportid ea50df7c-ff08-4416-8fb3-59d09667cc51, gfid
>> 7fecc27e-a8de-4fe5-adce-c5e00d35c7ac, name: typo3temp
>> [2011-06-10 08:54:15.194659] D [nfs3.c:1080:nfs3_fresh_lookup] 0-nfs-nfsv3: inode needs fresh lookup
>> [2011-06-10 08:54:15.194703] D [nfs3-helpers.c:2477:nfs3_log_newfh_res] 0-nfs-nfsv3: XID:
>> cb1a69ff, LOOKUP: NFS: 5(I/O error), POSIX: 107(Transport endpoint is not connected), FH:
>> hashcount 0, exportid 00000000-0000-0000-0000-000000000000, gfid 00000000-0000-0000-0000-000000000000
>> [2011-06-10 08:54:15.396934] D [rpcsvc.c:1940:nfs_rpcsvc_request_create] 0-nfsrpc: RPC XID:
>> cc1a69ff, Ver: 2, Program: 100003, ProgVers: 3, Proc: 1
>> [2011-06-10 08:54:15.396989] D [rpcsvc.c:1357:nfs_rpcsvc_program_actor] 0-nfsrpc: Actor found:
>> NFS3 - GETATTR
>> [2011-06-10 08:54:15.397017] D [nfs3-helpers.c:2292:nfs3_log_common_call] 0-nfs-nfsv3: XID:
>> cc1a69ff, GETATTR: args: FH: hashcount 1, exportid ea50df7c-ff08-4416-8fb3-59d09667cc51, gfid
>> 7fecc27e-a8de-4fe5-adce-c5e00d35c7ac
>> [2011-06-10 08:54:15.397064] I [afr-inode-read.c:270:afr_stat] 0-ksc-replicate-0:
>> /tomorrowwinners.com: no child is up
>> [2011-06-10 08:54:15.397087] D [nfs3-helpers.c:2424:nfs3_log_common_res] 0-nfs-nfsv3: XID:
>> cc1a69ff, GETATTR: NFS: 5(I/O error), POSIX: 107(Transport endpoint is not connected)
>> [2011-06-10 08:54:15.397441] D [rpcsvc.c:1940:nfs_rpcsvc_request_create] 0-nfsrpc: RPC XID:
>> cd1a69ff, Ver: 2, Program: 100003, ProgVers: 3, Proc: 3
>> [2011-06-10 08:54:15.397468] D [rpcsvc.c:1357:nfs_rpcsvc_program_actor] 0-nfsrpc: Actor found:
>> NFS3 - LOOKUP
>> [2011-06-10 08:54:15.397492] D [nfs3-helpers.c:2304:nfs3_log_fh_entry_call] 0-nfs-nfsv3: XID:
>> cd1a69ff, LOOKUP: args: FH: hashcount 0, exportid ea50df7c-ff08-4416-8fb3-59d09667cc51, gfid
>> 00000000-0000-0000-0000-000000000001, name: tomorrowwinners.com
>> [2011-06-10 08:54:15.397551] D [nfs3.c:1080:nfs3_fresh_lookup] 0-nfs-nfsv3: inode needs fresh lookup
>> [2011-06-10 08:54:15.397600] D [nfs3-helpers.c:2477:nfs3_log_newfh_res] 0-nfs-nfsv3: XID:
>> cd1a69ff, LOOKUP: NFS: 5(I/O error), POSIX: 107(Transport endpoint is not connected), FH:
>> hashcount 0, exportid 00000000-0000-0000-0000-000000000000, gfid 00000000-0000-0000-0000-000000000000
>> [2011-06-10 08:54:15.397953] D [rpcsvc.c:1940:nfs_rpcsvc_request_create] 0-nfsrpc: RPC XID:
>> ce1a69ff, Ver: 2, Program: 100003, ProgVers: 3, Proc: 4
>> [2011-06-10 08:54:15.397981] D [rpcsvc.c:1357:nfs_rpcsvc_program_actor] 0-nfsrpc: Actor found:
>> NFS3 - ACCESS
>> [2011-06-10 08:54:15.398004] D [nfs3-helpers.c:2292:nfs3_log_common_call] 0-nfs-nfsv3: XID:
>> ce1a69ff, ACCESS: args: FH: hashcount 0, exportid ea50df7c-ff08-4416-8fb3-59d09667cc51, gfid
>> 00000000-0000-0000-0000-000000000001
>> [2011-06-10 08:54:15.398038] I [afr-inode-read.c:270:afr_stat] 0-ksc-replicate-0: /: no child is up
>> [2011-06-10 08:54:15.398055] D [nfs3-helpers.c:2424:nfs3_log_common_res] 0-nfs-nfsv3: XID:
>> ce1a69ff, ACCESS: NFS: 5(I/O error), POSIX: 107(Transport endpoint is not connected)
>> [2011-06-10 08:54:15.600726] D [rpcsvc.c:1940:nfs_rpcsvc_request_create] 0-nfsrpc: RPC XID:
>> cf1a69ff, Ver: 2, Program: 100003, ProgVers: 3, Proc: 1
>> [2011-06-10 08:54:15.600769] D [rpcsvc.c:1357:nfs_rpcsvc_program_actor] 0-nfsrpc: Actor found:
>> NFS3 - GETATTR
>> [2011-06-10 08:54:15.600789] D [nfs3-helpers.c:2292:nfs3_log_common_call] 0-nfs-nfsv3: XID:
>> cf1a69ff, GETATTR: args: FH: hashcount 0, exportid ea50df7c-ff08-4416-8fb3-59d09667cc51, gfid
>> 00000000-0000-0000-0000-000000000001
>> [2011-06-10 08:54:15.600857] D [nfs3-helpers.c:2424:nfs3_log_common_res] 0-nfs-nfsv3: XID:
>> cf1a69ff, GETATTR: NFS: 5(I/O error), POSIX: 107(Transport endpoint is not connected)
>> [2011-06-10 08:54:15.601525] D [rpcsvc.c:1940:nfs_rpcsvc_request_create] 0-nfsrpc: RPC XID:
>> d01a69ff, Ver: 2, Program: 100003, ProgVers: 3, Proc: 4
>> [2011-06-10 08:54:15.601553] D [rpcsvc.c:1357:nfs_rpcsvc_program_actor] 0-nfsrpc: Actor found:
>> NFS3 - ACCESS
>> [2011-06-10 08:54:15.601576] D [nfs3-helpers.c:2292:nfs3_log_common_call] 0-nfs-nfsv3: XID:
>> d01a69ff, ACCESS: args: FH: hashcount 0, exportid ea50df7c-ff08-4416-8fb3-59d09667cc51, gfid
>> 00000000-0000-0000-0000-000000000001
>> [2011-06-10 08:54:15.601627] I [afr-inode-read.c:270:afr_stat] 0-ksc-replicate-0: /: no child is up
>> [2011-06-10 08:54:15.601646] D [nfs3-helpers.c:2424:nfs3_log_common_res] 0-nfs-nfsv3: XID:
>> d01a69ff, ACCESS: NFS: 5(I/O error), POSIX: 107(Transport endpoint is not connected)
>> [2011-06-10 08:54:15.725647] D [rpcsvc.c:1940:nfs_rpcsvc_request_create] 0-nfsrpc: RPC XID:
>> fd851fc2, Ver: 2, Program: 100003, ProgVers: 3, Proc: 4
>> [2011-06-10 08:54:15.725689] D [rpcsvc.c:1357:nfs_rpcsvc_program_actor] 0-nfsrpc: Actor found:
>> NFS3 - ACCESS
>> [2011-06-10 08:54:15.725719] D [nfs3-helpers.c:2292:nfs3_log_common_call] 0-nfs-nfsv3: XID:
>> fd851fc2, ACCESS: args: FH: hashcount 0, exportid ea50df7c-ff08-4416-8fb3-59d09667cc51, gfid
>> 00000000-0000-0000-0000-000000000001
>> [2011-06-10 08:54:15.725749] I [afr-inode-read.c:270:afr_stat] 0-ksc-replicate-0: /: no child is up
>> [2011-06-10 08:54:15.725769] D [nfs3-helpers.c:2424:nfs3_log_common_res] 0-nfs-nfsv3: XID:
>> fd851fc2, ACCESS: NFS: 5(I/O error), POSIX: 107(Transport endpoint is not connected)
>> [2011-06-10 08:54:15.726209] D [rpcsvc.c:1940:nfs_rpcsvc_request_create] 0-nfsrpc: RPC XID:
>> fe851fc2, Ver: 2, Program: 100003, ProgVers: 3, Proc: 4
>> [2011-06-10 08:54:15.726236] D [rpcsvc.c:1357:nfs_rpcsvc_program_actor] 0-nfsrpc: Actor found:
>> NFS3 - ACCESS
>> [2011-06-10 08:54:15.726252] D [nfs3-helpers.c:2292:nfs3_log_common_call] 0-nfs-nfsv3: XID:
>> fe851fc2, ACCESS: args: FH: hashcount 0, exportid ea50df7c-ff08-4416-8fb3-59d09667cc51, gfid
>> 00000000-0000-0000-0000-000000000001
>> [2011-06-10 08:54:15.726276] I [afr-inode-read.c:270:afr_stat] 0-ksc-replicate-0: /: no child is up
>> [2011-06-10 08:54:15.726292] D [nfs3-helpers.c:2424:nfs3_log_common_res] 0-nfs-nfsv3: XID:
>> fe851fc2, ACCESS: NFS: 5(I/O error), POSIX: 107(Transport endpoint is not connected)
>> [2011-06-10 08:54:15.804546] D [rpcsvc.c:1940:nfs_rpcsvc_request_create] 0-nfsrpc: RPC XID:
>> d11a69ff, Ver: 2, Program: 100003, ProgVers: 3, Proc: 1
>> [2011-06-10 08:54:15.804580] D [rpcsvc.c:1357:nfs_rpcsvc_program_actor] 0-nfsrpc: Actor found:
>> NFS3 - GETATTR
>> [2011-06-10 08:54:15.804606] D [nfs3-helpers.c:2292:nfs3_log_common_call] 0-nfs-nfsv3: XID:
>> d11a69ff, GETATTR: args: FH: hashcount 0, exportid ea50df7c-ff08-4416-8fb3-59d09667cc51, gfid
>> 00000000-0000-0000-0000-000000000001
>> [2011-06-10 08:54:15.804658] D [nfs3-helpers.c:2424:nfs3_log_common_res] 0-nfs-nfsv3: XID:
>> d11a69ff, GETATTR: NFS: 5(I/O error), POSIX: 107(Transport endpoint is not connected)
>> [2011-06-10 08:54:15.805060] D [rpcsvc.c:1940:nfs_rpcsvc_request_create] 0-nfsrpc: RPC XID:
>> d21a69ff, Ver: 2, Program: 100003, ProgVers: 3, Proc: 4
>> [2011-06-10 08:54:15.805088] D [rpcsvc.c:1357:nfs_rpcsvc_program_actor] 0-nfsrpc: Actor found:
>> NFS3 - ACCESS
>> [2011-06-10 08:54:15.805111] D [nfs3-helpers.c:2292:nfs3_log_common_call] 0-nfs-nfsv3: XID:
>> d21a69ff, ACCESS: args: FH: hashcount 0, exportid ea50df7c-ff08-4416-8fb3-59d09667cc51, gfid
>> 00000000-0000-0000-0000-000000000001
>> [2011-06-10 08:54:15.805143] I [afr-inode-read.c:270:afr_stat] 0-ksc-replicate-0: /: no child is up
>> [2011-06-10 08:54:15.805159] D [nfs3-helpers.c:2424:nfs3_log_common_res] 0-nfs-nfsv3: XID:
>> d21a69ff, ACCESS: NFS: 5(I/O error), POSIX: 107(Transport endpoint is not connected)
>> [2011-06-10 08:54:16.8639] D [rpcsvc.c:1940:nfs_rpcsvc_request_create] 0-nfsrpc: RPC XID:
>> d31a69ff, Ver: 2, Program: 100003, ProgVers: 3, Proc: 1
>> [2011-06-10 08:54:16.8698] D [rpcsvc.c:1357:nfs_rpcsvc_program_actor] 0-nfsrpc: Actor found: NFS3
>> - GETATTR
>> [2011-06-10 08:54:16.20711] D [nfs3-helpers.c:2292:nfs3_log_common_call] 0-nfs-nfsv3: XID:
>> d31a69ff, GETATTR: args: FH: hashcount 0, exportid ea50df7c-ff08-4416-8fb3-59d09667cc51, gfid
>> 00000000-0000-0000-0000-000000000001
>> [2011-06-10 08:54:16.20821] D [nfs3-helpers.c:2424:nfs3_log_common_res] 0-nfs-nfsv3: XID:
>> d31a69ff, GETATTR: NFS: 5(I/O error), POSIX: 107(Transport endpoint is not connected)
>> [2011-06-10 08:54:16.21517] D [rpcsvc.c:1940:nfs_rpcsvc_request_create] 0-nfsrpc: RPC XID:
>> d41a69ff, Ver: 2, Program: 100003, ProgVers: 3, Proc: 4
>> [2011-06-10 08:54:16.21556] D [rpcsvc.c:1357:nfs_rpcsvc_program_actor] 0-nfsrpc: Actor found: NFS3
>> - ACCESS
>> [2011-06-10 08:54:16.21575] D [nfs3-helpers.c:2292:nfs3_log_common_call] 0-nfs-nfsv3: XID:
>> d41a69ff, ACCESS: args: FH: hashcount 0, exportid ea50df7c-ff08-4416-8fb3-59d09667cc51, gfid
>> 00000000-0000-0000-0000-000000000001
>> [2011-06-10 08:54:16.21606] I [afr-inode-read.c:270:afr_stat] 0-ksc-replicate-0: /: no child is up
>> [2011-06-10 08:54:16.21625] D [nfs3-helpers.c:2424:nfs3_log_common_res] 0-nfs-nfsv3: XID:
>> d41a69ff, ACCESS: NFS: 5(I/O error), POSIX: 107(Transport endpoint is not connected)
>> [2011-06-10 08:54:16.225038] D [rpcsvc.c:1940:nfs_rpcsvc_request_create] 0-nfsrpc: RPC XID:
>> d51a69ff, Ver: 2, Program: 100003, ProgVers: 3, Proc: 1
>> [2011-06-10 08:54:16.225096] D [rpcsvc.c:1357:nfs_rpcsvc_program_actor] 0-nfsrpc: Actor found:
>> NFS3 - GETATTR
>> [2011-06-10 08:54:16.225118] D [nfs3-helpers.c:2292:nfs3_log_common_call] 0-nfs-nfsv3: XID:
>> d51a69ff, GETATTR: args: FH: hashcount 0, exportid ea50df7c-ff08-4416-8fb3-59d09667cc51, gfid
>> 00000000-0000-0000-0000-000000000001
>> [2011-06-10 08:54:16.225193] D [nfs3-helpers.c:2424:nfs3_log_common_res] 0-nfs-nfsv3: XID:
>> d51a69ff, GETATTR: NFS: 5(I/O error), POSIX: 107(Transport endpoint is not connected)
>> [2011-06-10 08:54:16.225595] D [rpcsvc.c:1940:nfs_rpcsvc_request_create] 0-nfsrpc: RPC XID:
>> d61a69ff, Ver: 2, Program: 100003, ProgVers: 3, Proc: 4
>> [2011-06-10 08:54:16.225622] D [rpcsvc.c:1357:nfs_rpcsvc_program_actor] 0-nfsrpc: Actor found:
>> NFS3 - ACCESS
>> [2011-06-10 08:54:16.225639] D [nfs3-helpers.c:2292:nfs3_log_common_call] 0-nfs-nfsv3: XID:
>> d61a69ff, ACCESS: args: FH: hashcount 0, exportid ea50df7c-ff08-4416-8fb3-59d09667cc51, gfid
>> 00000000-0000-0000-0000-000000000001
>> [2011-06-10 08:54:16.225669] I [afr-inode-read.c:270:afr_stat] 0-ksc-replicate-0: /: no child is up
>> [2011-06-10 08:54:16.225688] D [nfs3-helpers.c:2424:nfs3_log_common_res] 0-nfs-nfsv3: XID:
>> d61a69ff, ACCESS: NFS: 5(I/O error), POSIX: 107(Transport endpoint is not connected)
>> [2011-06-10 08:54:16.243210] D [rpcsvc.c:1940:nfs_rpcsvc_request_create] 0-nfsrpc: RPC XID:
>> d71a69ff, Ver: 2, Program: 100003, ProgVers: 3, Proc: 4
>> [2011-06-10 08:54:16.243232] D [rpcsvc.c:1357:nfs_rpcsvc_program_actor] 0-nfsrpc: Actor found:
>> NFS3 - ACCESS
>> [2011-06-10 08:54:16.243249] D [nfs3-helpers.c:2292:nfs3_log_common_call] 0-nfs-nfsv3: XID:
>> d71a69ff, ACCESS: args: FH: hashcount 0, exportid ea50df7c-ff08-4416-8fb3-59d09667cc51, gfid
>> 00000000-0000-0000-0000-000000000001
>> [2011-06-10 08:54:16.243274] I [afr-inode-read.c:270:afr_stat] 0-ksc-replicate-0: /: no child is up
>> [2011-06-10 08:54:16.243291] D [nfs3-helpers.c:2424:nfs3_log_common_res] 0-nfs-nfsv3: XID:
>> d71a69ff, ACCESS: NFS: 5(I/O error), POSIX: 107(Transport endpoint is not connected)
>> [2011-06-10 08:54:16.243670] D [rpcsvc.c:1940:nfs_rpcsvc_request_create] 0-nfsrpc: RPC XID:
>> d81a69ff, Ver: 2, Program: 100003, ProgVers: 3, Proc: 1
>> [2011-06-10 08:54:16.243697] D [rpcsvc.c:1357:nfs_rpcsvc_program_actor] 0-nfsrpc: Actor found:
>> NFS3 - GETATTR
>> [2011-06-10 08:54:16.243717] D [nfs3-helpers.c:2292:nfs3_log_common_call] 0-nfs-nfsv3: XID:
>> d81a69ff, GETATTR: args: FH: hashcount 0, exportid ea50df7c-ff08-4416-8fb3-59d09667cc51, gfid
>> 00000000-0000-0000-0000-000000000001
>> [2011-06-10 08:54:16.243756] D [nfs3-helpers.c:2424:nfs3_log_common_res] 0-nfs-nfsv3: XID:
>> d81a69ff, GETATTR: NFS: 5(I/O error), POSIX: 107(Transport endpoint is not connected)
>> [2011-06-10 08:54:16.428628] D [rpcsvc.c:1940:nfs_rpcsvc_request_create] 0-nfsrpc: RPC XID:
>> d91a69ff, Ver: 2, Program: 100003, ProgVers: 3, Proc: 1
>> [2011-06-10 08:54:16.428689] D [rpcsvc.c:1357:nfs_rpcsvc_program_actor] 0-nfsrpc: Actor found:
>> NFS3 - GETATTR
>> [2011-06-10 08:54:16.428711] D [nfs3-helpers.c:2292:nfs3_log_common_call] 0-nfs-nfsv3: XID:
>> d91a69ff, GETATTR: args: FH: hashcount 0, exportid ea50df7c-ff08-4416-8fb3-59d09667cc51, gfid
>> 00000000-0000-0000-0000-000000000001
>> [2011-06-10 08:54:16.428787] D [nfs3-helpers.c:2424:nfs3_log_common_res] 0-nfs-nfsv3: XID:
>> d91a69ff, GETATTR: NFS: 5(I/O error), POSIX: 107(Transport endpoint is not connected)
>> [2011-06-10 08:54:16.429129] D [rpcsvc.c:1940:nfs_rpcsvc_request_create] 0-nfsrpc: RPC XID:
>> da1a69ff, Ver: 2, Program: 100003, ProgVers: 3, Proc: 4
>> [2011-06-10 08:54:16.429157] D [rpcsvc.c:1357:nfs_rpcsvc_program_actor] 0-nfsrpc: Actor found:
>> NFS3 - ACCESS
>> [2011-06-10 08:54:16.429182] D [nfs3-helpers.c:2292:nfs3_log_common_call] 0-nfs-nfsv3: XID:
>> da1a69ff, ACCESS: args: FH: hashcount 0, exportid ea50df7c-ff08-4416-8fb3-59d09667cc51, gfid
>> 00000000-0000-0000-0000-000000000001
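A quick way to triage a flood like the one above is to tally which NFS3 procedures are failing. A minimal shell sketch over a couple of the response lines from the log (the `nfs3.log` filename and the `sed` field layout are assumptions inferred from the `nfs3_log_common_res` format shown above):

```shell
# Two representative NFS3 response lines copied from the log above,
# saved to a file ('nfs3.log' is an assumed name).
cat > nfs3.log <<'EOF'
[2011-06-10 08:54:15.397087] D [nfs3-helpers.c:2424:nfs3_log_common_res] 0-nfs-nfsv3: XID: cc1a69ff, GETATTR: NFS: 5(I/O error), POSIX: 107(Transport endpoint is not connected)
[2011-06-10 08:54:15.398055] D [nfs3-helpers.c:2424:nfs3_log_common_res] 0-nfs-nfsv3: XID: ce1a69ff, ACCESS: NFS: 5(I/O error), POSIX: 107(Transport endpoint is not connected)
[2011-06-10 08:54:15.601646] D [nfs3-helpers.c:2424:nfs3_log_common_res] 0-nfs-nfsv3: XID: d01a69ff, ACCESS: NFS: 5(I/O error), POSIX: 107(Transport endpoint is not connected)
EOF

# Extract the procedure name from each response line and count per procedure.
grep 'nfs3_log_common_res' nfs3.log \
  | sed -n 's/.*XID: [0-9a-f]*, \([A-Z]*\): NFS: .*/\1/p' \
  | sort | uniq -c
```

For reference, POSIX error 107 is ENOTCONN on Linux, which matches the "Transport endpoint is not connected" text: every procedure is failing because both replica bricks are unreachable, consistent with the "no child is up" messages.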
>>
>> On 09.06.2011 at 17:09, anthony garnier wrote:
>>> Hi,
>>>
>>> I got the same problem as Juergen,
>>> My volume is a simple replicated volume with 2 hosts, running GlusterFS 3.2.0:
>>>
>>> Volume Name: poolsave
>>> Type: Replicate
>>> Status: Started
>>> Number of Bricks: 2
>>> Transport-type: tcp
>>> Bricks:
>>> Brick1: ylal2950:/soft/gluster-data
>>> Brick2: ylal2960:/soft/gluster-data
>>> Options Reconfigured:
>>> diagnostics.brick-log-level: DEBUG
>>> network.ping-timeout: 20
>>> performance.cache-size: 512MB
>>> nfs.port: 2049
>>>
>>> I'm running this command:
>>>
>>> I get these errors:
>>> tar: ./uvs00: owner not changed
>>> tar: could not stat ./uvs00/log/0906uvsGESEC.log
>>> tar: ./uvs00: group not changed
>>> tar: could not stat ./uvs00/log/0306uvsGESEC.log
>>> tar: ./uvs00/log: Input/output error
>>> cannot change back?: Unknown error 526
>>> tar: ./uvs00/log: owner not changed
>>> tar: ./uvs00/log: group not changed
>>> tar: tape blocksize error
>>>
>>> And then I tried to "ls" in the gluster mount:
>>> /bin/ls: .: Input/output error
>>>
>>> The only way to recover is to restart the volume.
>>>
>>>
>>> Here is the log file in DEBUG mode:
>>>
>>>
>>> Given volfile:
>>> +------------------------------------------------------------------------------+
>>> 1: volume poolsave-client-0
>>> 2: type protocol/client
>>> 3: option remote-host ylal2950
>>> 4: option remote-subvolume /soft/gluster-data
>>> 5: option transport-type tcp
>>> 6: option ping-timeout 20
>>> 7: end-volume
>>> 8:
>>> 9: volume poolsave-client-1
>>> 10: type protocol/client
>>> 11: option remote-host ylal2960
>>> 12: option remote-subvolume /soft/gluster-data
>>> 13: option transport-type tcp
>>> 14: option ping-timeout 20
>>> 15: end-volume
>>> 16:
>>> 17: volume poolsave-replicate-0
>>> 18: type cluster/replicate
>>> 19: subvolumes poolsave-client-0 poolsave-client-1
>>> 20: end-volume
>>> 21:
>>> 22: volume poolsave-write-behind
>>> 23: type performance/write-behind
>>> 24: subvolumes poolsave-replicate-0
>>> 25: end-volume
>>> 26:
>>> 27: volume poolsave-read-ahead
>>> 28: type performance/read-ahead
>>> 29: subvolumes poolsave-write-behind
>>> 30: end-volume
>>> 31:
>>> 32: volume poolsave-io-cache
>>> 33: type performance/io-cache
>>> 34: option cache-size 512MB
>>> 35: subvolumes poolsave-read-ahead
>>> 36: end-volume
>>> 37:
>>> 38: volume poolsave-quick-read
>>> 39: type performance/quick-read
>>> 40: option cache-size 512MB
>>> 41: subvolumes poolsave-io-cache
>>> 42: end-volume
>>> 43:
>>> 44: volume poolsave-stat-prefetch
>>> 45: type performance/stat-prefetch
>>> 46: subvolumes poolsave-quick-read
>>> 47: end-volume
>>> 48:
>>> 49: volume poolsave
>>> 50: type debug/io-stats
>>> 51: option latency-measurement off
>>> 52: option count-fop-hits off
>>> 53: subvolumes poolsave-stat-prefetch
>>> 54: end-volume
>>> 55:
>>> 56: volume nfs-server
>>> 57: type nfs/server
>>> 58: option nfs.dynamic-volumes on
>>> 59: option rpc-auth.addr.poolsave.allow *
>>> 60: option nfs3.poolsave.volume-id 71e0dabf-4620-4b6d-b138-3266096b93b6
>>> 61: option nfs.port 2049
>>> 62: subvolumes poolsave
>>> 63: end-volume
>>>
>>> +------------------------------------------------------------------------------+
>>> [2011-06-09 16:52:23.709018] I [rpc-clnt.c:1531:rpc_clnt_reconfig] 0-poolsave-client-0: changing
>>> port to 24014 (from 0)
>>> [2011-06-09 16:52:23.709211] I [rpc-clnt.c:1531:rpc_clnt_reconfig] 0-poolsave-client-1: changing
>>> port to 24011 (from 0)
>>> [2011-06-09 16:52:27.716417] I [client-handshake.c:1080:select_server_supported_programs]
>>> 0-poolsave-client-0: Using Program GlusterFS-3.1.0, Num (1298437), Version (310)
>>> [2011-06-09 16:52:27.716650] I [client-handshake.c:913:client_setvolume_cbk] 0-poolsave-client-0:
>>> Connected to 10.68.217.85:24014, attached to remote volume '/soft/gluster-data'.
>>> [2011-06-09 16:52:27.716679] I [afr-common.c:2514:afr_notify] 0-poolsave-replicate-0: Subvolume
>>> 'poolsave-client-0' came back up; going online.
>>> [2011-06-09 16:52:27.717020] I [afr-common.c:836:afr_fresh_lookup_cbk] 0-poolsave-replicate-0: added
>>> root inode
>>> [2011-06-09 16:52:27.729719] I [client-handshake.c:1080:select_server_supported_programs]
>>> 0-poolsave-client-1: Using Program GlusterFS-3.1.0, Num (1298437), Version (310)
>>> [2011-06-09 16:52:27.730014] I [client-handshake.c:913:client_setvolume_cbk] 0-poolsave-client-1:
>>> Connected to 10.68.217.86:24011, attached to remote volume '/soft/gluster-data'.
>>> [2011-06-09 17:01:35.537084] W [stat-prefetch.c:178:sp_check_and_create_inode_ctx]
>>> (-->/usr/local/lib/glusterfs/3.2.0/xlator/nfs/server.so(nfs_fop_mkdir+0x1cc) [0x2aaaab3b88fc]
>>> (-->/usr/local/lib/glusterfs/3.2.0/xlator/debug/io-stats.so(io_stats_mkdir+0x151) [0x2aaaab2948e1]
>>> (-->/usr/local/lib/glusterfs/3.2.0/xlator/performance/stat-prefetch.so(sp_mkdir+0xd2)
>>> [0x2aaaab1856c2]))) 0-poolsave-stat-prefetch: stat-prefetch context is present in inode (ino:0
>>> gfid:00000000-0000-0000-0000-000000000000) when it is supposed to be not present
>>> [2011-06-09 17:01:35.546601] W [stat-prefetch.c:178:sp_check_and_create_inode_ctx]
>>> (-->/usr/local/lib/glusterfs/3.2.0/xlator/nfs/server.so(nfs_fop_create+0x1db) [0x2aaaab3b95bb]
>>> (-->/usr/local/lib/glusterfs/3.2.0/xlator/debug/io-stats.so(io_stats_create+0x165) [0x2aaaab294ad5]
>>> (-->/usr/local/lib/glusterfs/3.2.0/xlator/performance/stat-prefetch.so(sp_create+0xbc)
>>> [0x2aaaab185c9c]))) 0-poolsave-stat-prefetch: stat-prefetch context is present in inode (ino:0
>>> gfid:00000000-0000-0000-0000-000000000000) when it is supposed to be not present
>>> [2011-06-09 17:01:35.569755] I [client3_1-fops.c:547:client3_1_rmdir_cbk] 0-poolsave-client-0:
>>> remote operation failed: Directory not empty
>>> [2011-06-09 17:01:35.569881] I [client3_1-fops.c:547:client3_1_rmdir_cbk] 0-poolsave-client-1:
>>> remote operation failed: Directory not empty
>>> [2011-06-09 17:01:35.579674] W [stat-prefetch.c:178:sp_check_and_create_inode_ctx]
>>> (-->/usr/local/lib/glusterfs/3.2.0/xlator/nfs/server.so(nfs_fop_mkdir+0x1cc) [0x2aaaab3b88fc]
>>> (-->/usr/local/lib/glusterfs/3.2.0/xlator/debug/io-stats.so(io_stats_mkdir+0x151) [0x2aaaab2948e1]
>>> (-->/usr/local/lib/glusterfs/3.2.0/xlator/performance/stat-prefetch.so(sp_mkdir+0xd2)
>>> [0x2aaaab1856c2]))) 0-poolsave-stat-prefetch: stat-prefetch context is present in inode (ino:0
>>> gfid:00000000-0000-0000-0000-000000000000) when it is supposed to be not present
>>> [2011-06-09 17:01:35.587907] W [stat-prefetch.c:178:sp_check_and_create_inode_ctx]
>>> (-->/usr/local/lib/glusterfs/3.2.0/xlator/nfs/server.so(nfs_fop_create+0x1db) [0x2aaaab3b95bb]
>>> (-->/usr/local/lib/glusterfs/3.2.0/xlator/debug/io-stats.so(io_stats_create+0x165) [0x2aaaab294ad5]
>>> (-->/usr/local/lib/glusterfs/3.2.0/xlator/performance/stat-prefetch.so(sp_create+0xbc)
>>> [0x2aaaab185c9c]))) 0-poolsave-stat-prefetch: stat-prefetch context is present in inode (ino:0
>>> gfid:00000000-0000-0000-0000-000000000000) when it is supposed to be not present
>>> [2011-06-09 17:01:35.612918] W [stat-prefetch.c:178:sp_check_and_create_inode_ctx]
>>> (-->/usr/local/lib/glusterfs/3.2.0/xlator/nfs/server.so(nfs_fop_create+0x1db) [0x2aaaab3b95bb]
>>> (-->/usr/local/lib/glusterfs/3.2.0/xlator/debug/io-stats.so(io_stats_create+0x165) [0x2aaaab294ad5]
>>> (-->/usr/local/lib/glusterfs/3.2.0/xlator/performance/stat-prefetch.so(sp_create+0xbc)
>>> [0x2aaaab185c9c]))) 0-poolsave-stat-prefetch: stat-prefetch context is present in inode (ino:0
>>> gfid:00000000-0000-0000-0000-000000000000) when it is supposed to be not present
>>> [2011-06-09 17:01:35.645357] W [stat-prefetch.c:178:sp_check_and_create_inode_ctx]
>>> (-->/usr/local/lib/glusterfs/3.2.0/xlator/nfs/server.so(nfs_fop_create+0x1db) [0x2aaaab3b95bb]
>>> (-->/usr/local/lib/glusterfs/3.2.0/xlator/debug/io-stats.so(io_stats_create+0x165) [0x2aaaab294ad5]
>>> (-->/usr/local/lib/glusterfs/3.2.0/xlator/performance/stat-prefetch.so(sp_create+0xbc)
>>> [0x2aaaab185c9c]))) 0-poolsave-stat-prefetch: stat-prefetch context is present in inode (ino:0
>>> gfid:00000000-0000-0000-0000-000000000000) when it is supposed to be not present
>>> [2011-06-09 17:01:35.660873] I [client3_1-fops.c:547:client3_1_rmdir_cbk] 0-poolsave-client-0:
>>> remote operation failed: Directory not empty
>>> [2011-06-09 17:01:35.660955] I [client3_1-fops.c:547:client3_1_rmdir_cbk] 0-poolsave-client-1:
>>> remote operation failed: Directory not empty
>>> [2011-06-09 17:01:35.665933] I [client3_1-fops.c:547:client3_1_rmdir_cbk] 0-poolsave-client-0:
>>> remote operation failed: Directory not empty
>>> [2011-06-09 17:01:35.666057] I [client3_1-fops.c:547:client3_1_rmdir_cbk] 0-poolsave-client-1:
>>> remote operation failed: Directory not empty
>>> [2011-06-09 17:01:35.671199] I [client3_1-fops.c:547:client3_1_rmdir_cbk] 0-poolsave-client-0:
>>> remote operation failed: Directory not empty
>>> [2011-06-09 17:01:35.671241] I [client3_1-fops.c:547:client3_1_rmdir_cbk] 0-poolsave-client-1:
>>> remote operation failed: Directory not empty
>>> [2011-06-09 17:01:35.680959] W [stat-prefetch.c:178:sp_check_and_create_inode_ctx]
>>> (-->/usr/local/lib/glusterfs/3.2.0/xlator/nfs/server.so(nfs_fop_create+0x1db) [0x2aaaab3b95bb]
>>> (-->/usr/local/lib/glusterfs/3.2.0/xlator/debug/io-stats.so(io_stats_create+0x165) [0x2aaaab294ad5]
>>> (-->/usr/local/lib/glusterfs/3.2.0/xlator/performance/stat-prefetch.so(sp_create+0xbc)
>>> [0x2aaaab185c9c]))) 0-poolsave-stat-prefetch: stat-prefetch context is present in inode (ino:0
>>> gfid:00000000-0000-0000-0000-000000000000) when it is supposed to be not present
>>> [2011-06-09 17:01:35.715633] W [stat-prefetch.c:178:sp_check_and_create_inode_ctx]
>>> (-->/usr/local/lib/glusterfs/3.2.0/xlator/nfs/server.so(nfs_fop_create+0x1db) [0x2aaaab3b95bb]
>>> (-->/usr/local/lib/glusterfs/3.2.0/xlator/debug/io-stats.so(io_stats_create+0x165) [0x2aaaab294ad5]
>>> (-->/usr/local/lib/glusterfs/3.2.0/xlator/performance/stat-prefetch.so(sp_create+0xbc)
>>> [0x2aaaab185c9c]))) 0-poolsave-stat-prefetch: stat-prefetch context is present in inode (ino:0
>>> gfid:00000000-0000-0000-0000-000000000000) when it is supposed to be not present
>>> [2011-06-09 17:01:35.732798] I [client3_1-fops.c:547:client3_1_rmdir_cbk] 0-poolsave-client-0:
>>> remote operation failed: Permission denied
>>> [2011-06-09 17:01:35.733044] I [client3_1-fops.c:547:client3_1_rmdir_cbk] 0-poolsave-client-1:
>>> remote operation failed: Permission denied
>>> [2011-06-09 17:01:35.750009] W [stat-prefetch.c:178:sp_check_and_create_inode_ctx]
>>> (-->/usr/local/lib/glusterfs/3.2.0/xlator/nfs/server.so(nfs_fop_create+0x1db) [0x2aaaab3b95bb]
>>> (-->/usr/local/lib/glusterfs/3.2.0/xlator/debug/io-stats.so(io_stats_create+0x165) [0x2aaaab294ad5]
>>> (-->/usr/local/lib/glusterfs/3.2.0/xlator/performance/stat-prefetch.so(sp_create+0xbc)
>>> [0x2aaaab185c9c]))) 0-poolsave-stat-prefetch: stat-prefetch context is present in inode (ino:0
>>> gfid:00000000-0000-0000-0000-000000000000) when it is supposed to be not present
>>> [2011-06-09 17:01:35.784610] W [socket.c:1494:__socket_proto_state_machine] 0-poolsave-client-0:
>>> reading from socket failed. Error (Transport endpoint is not connected), peer (10.68.217.85:24014)
>>> [2011-06-09 17:01:35.784745] E [rpc-clnt.c:338:saved_frames_unwind]
>>> (-->/usr/local/lib/libgfrpc.so.0(rpc_clnt_notify+0xb9) [0x2ab58145f7f9]
>>> (-->/usr/local/lib/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x7e) [0x2ab58145ef8e]
>>> (-->/usr/local/lib/libgfrpc.so.0(saved_frames_destroy+0xe) [0x2ab58145eefe]))) 0-poolsave-client-0:
>>> forced unwinding frame type(GlusterFS 3.1) op(SETATTR(38)) called at 2011-06-09 17:01:35.752080
>>> [2011-06-09 17:01:35.784770] I [client3_1-fops.c:1640:client3_1_setattr_cbk] 0-poolsave-client-0:
>>> remote operation failed: Transport endpoint is not connected
>>> [2011-06-09 17:01:35.784811] E [rpc-clnt.c:338:saved_frames_unwind]
>>> (-->/usr/local/lib/libgfrpc.so.0(rpc_clnt_notify+0xb9) [0x2ab58145f7f9]
>>> (-->/usr/local/lib/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x7e) [0x2ab58145ef8e]
>>> (-->/usr/local/lib/libgfrpc.so.0(saved_frames_destroy+0xe) [0x2ab58145eefe]))) 0-poolsave-client-0:
>>> forced unwinding frame type(GlusterFS 3.1) op(STAT(1)) called at 2011-06-09 17:01:35.752414
>>> [2011-06-09 17:01:35.784828] I [client3_1-fops.c:411:client3_1_stat_cbk] 0-poolsave-client-0: remote
>>> operation failed: Transport endpoint is not connected
>>> [2011-06-09 17:01:35.784875] I [client.c:1883:client_rpc_notify] 0-poolsave-client-0: disconnected
>>> [2011-06-09 17:01:35.785400] W [socket.c:204:__socket_rwv] 0-poolsave-client-1: readv failed
>>> (Connection reset by peer)
>>> [2011-06-09 17:01:35.785435] W [socket.c:1494:__socket_proto_state_machine] 0-poolsave-client-1:
>>> reading from socket failed. Error (Connection reset by peer), peer (10.68.217.86:24011)
>>> [2011-06-09 17:01:35.785496] E [rpc-clnt.c:338:saved_frames_unwind]
>>> (-->/usr/local/lib/libgfrpc.so.0(rpc_clnt_notify+0xb9) [0x2ab58145f7f9]
>>> (-->/usr/local/lib/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x7e) [0x2ab58145ef8e]
>>> (-->/usr/local/lib/libgfrpc.so.0(saved_frames_destroy+0xe) [0x2ab58145eefe]))) 0-poolsave-client-1:
>>> forced unwinding frame type(GlusterFS 3.1) op(SETATTR(38)) called at 2011-06-09 17:01:35.752089
>>> [2011-06-09 17:01:35.785516] I [client3_1-fops.c:1640:client3_1_setattr_cbk] 0-poolsave-client-1:
>>> remote operation failed: Transport endpoint is not connected
>>> [2011-06-09 17:01:35.785542] W [client3_1-fops.c:4379:client3_1_xattrop] 0-poolsave-client-0: failed
>>> to send the fop: Transport endpoint is not connected
>>> [2011-06-09 17:01:35.817662] I [socket.c:2272:socket_submit_request] 0-poolsave-client-1: not
>>> connected (priv->connected = 0)
>>> [2011-06-09 17:01:35.817698] W [rpc-clnt.c:1411:rpc_clnt_submit] 0-poolsave-client-1: failed to
>>> submit rpc-request (XID: 0x576x Program: GlusterFS 3.1, ProgVers: 310, Proc: 33) to rpc-transport
>>> (poolsave-client-1)
>>> [2011-06-09 17:01:35.817721] W [client3_1-fops.c:4735:client3_1_inodelk] 0-poolsave-client-0: failed
>>> to send the fop: Transport endpoint is not connected
>>> [2011-06-09 17:01:35.817744] W [rpc-clnt.c:1411:rpc_clnt_submit] 0-poolsave-client-1: failed to
>>> submit rpc-request (XID: 0x577x Program: GlusterFS 3.1, ProgVers: 310, Proc: 29) to rpc-transport
>>> (poolsave-client-1)
>>> [2011-06-09 17:01:35.817780] I [client3_1-fops.c:1226:client3_1_inodelk_cbk] 0-poolsave-client-1:
>>> remote operation failed: Transport endpoint is not connected
>>> [2011-06-09 17:01:35.817897] E [rpc-clnt.c:338:saved_frames_unwind]
>>> (-->/usr/local/lib/libgfrpc.so.0(rpc_clnt_notify+0xb9) [0x2ab58145f7f9]
>>> (-->/usr/local/lib/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x7e) [0x2ab58145ef8e]
>>> (-->/usr/local/lib/libgfrpc.so.0(saved_frames_destroy+0xe) [0x2ab58145eefe]))) 0-poolsave-client-1:
>>> forced unwinding frame type(GlusterFS 3.1) op(STAT(1)) called at 2011-06-09 17:01:35.784870
>>> [2011-06-09 17:01:35.817918] I [client3_1-fops.c:411:client3_1_stat_cbk] 0-poolsave-client-1: remote
>>> operation failed: Transport endpoint is not connected
>>> [2011-06-09 17:01:35.817969] I [client.c:1883:client_rpc_notify] 0-poolsave-client-1: disconnected
>>> [2011-06-09 17:01:35.817988] E [afr-common.c:2546:afr_notify] 0-poolsave-replicate-0: All subvolumes
>>> are down. Going offline until atleast one of them comes back up.
>>> [2011-06-09 17:01:35.818007] E [socket.c:1685:socket_connect_finish] 0-poolsave-client-1: connection
>>> to 10.68.217.86:24011 failed (Connection refused)
>>> [2011-06-09 17:01:35.818606] I [afr.h:838:AFR_LOCAL_INIT] 0-poolsave-replicate-0: no subvolumes up
>>> [2011-06-09 17:01:35.819129] I [afr-inode-read.c:270:afr_stat] 0-poolsave-replicate-0: /uvs00/log:
>>> no child is up
>>> [2011-06-09 17:01:35.819354] I [afr-inode-read.c:270:afr_stat] 0-poolsave-replicate-0: /uvs00/log:
>>> no child is up
>>> [2011-06-09 17:01:35.820090] I [afr-inode-read.c:270:afr_stat] 0-poolsave-replicate-0: /uvs00: no
>>> child is up
>>> [2011-06-09 17:01:35.820760] I [afr-inode-read.c:270:afr_stat] 0-poolsave-replicate-0: /: no
>>> child is up
>>> [2011-06-09 17:01:35.821212] I [afr-inode-read.c:270:afr_stat] 0-poolsave-replicate-0: /: no
>>> child is up
>>> [2011-06-09 17:01:35.821600] I [afr-inode-read.c:270:afr_stat] 0-poolsave-replicate-0: /: no
>>> child is up
>>> [2011-06-09 17:01:35.822123] I [afr-inode-read.c:270:afr_stat] 0-poolsave-replicate-0: /: no
>>> child is up
>>> [2011-06-09 17:01:35.822511] I [afr-inode-read.c:270:afr_stat] 0-poolsave-replicate-0: /: no
>>> child is up
>>> [2011-06-09 17:01:35.822975] I [afr-inode-read.c:270:afr_stat] 0-poolsave-replicate-0: /: no
>>> child is up
>>> [2011-06-09 17:01:35.823286] I [afr-inode-read.c:270:afr_stat] 0-poolsave-replicate-0: /: no
>>> child is up
>>> [2011-06-09 17:01:35.823583] I [afr-inode-read.c:270:afr_stat] 0-poolsave-replicate-0: /: no
>>> child is up
>>> [2011-06-09 17:01:35.823857] I [afr-inode-read.c:270:afr_stat] 0-poolsave-replicate-0: /: no
>>> child is up
>>> [2011-06-09 17:01:47.518006] I [afr-inode-read.c:270:afr_stat] 0-poolsave-replicate-0: /: no
>>> child is up
>>> [2011-06-09 17:01:49.39204] E [socket.c:1685:socket_connect_finish] 0-poolsave-client-0: connection
>>> to 10.68.217.85:24014 failed (Connection refused)
>>> [2011-06-09 17:01:49.136932] I [afr-inode-read.c:270:afr_stat] 0-poolsave-replicate-0: /: no
>>> child is up
>>>
>>>
>>>
>>> > Message: 7
>>> > Date: Thu, 9 Jun 2011 12:56:39 +0530
>>> > From: Shehjar Tikoo <shehjart at gluster.com>
>>> > Subject: Re: [Gluster-users] Glusterfs 3.2.0 NFS Problem
>>> > To: Jürgen Winkler <juergen.winkler at xidras.com>
>>> > Cc: gluster-users at gluster.org
>>> > Message-ID: <4DF075AF.3040509 at gluster.com>
>>> > Content-Type: text/plain; charset="us-ascii"; format=flowed
>>> >
>>> > This can happen if all your servers were unreachable for a few seconds. The
>>> > situation must have rectified itself during the restart. We could confirm this if you
>>> > change the log level on nfs to DEBUG and send us the log.
>>> >
>>> > Thanks
>>> > -Shehjar
>>> >
>>> > Jürgen Winkler wrote:
>>> > > Hi,
>>> > >
>>> > > I noticed strange behavior with NFS and GlusterFS 3.2.0: 3 of our
>>> > > servers are losing the mount, but when you restart the volume on the
>>> > > server it works again without a remount.
>>> > >
>>> > > On the server I noticed these entries in the GlusterFS/NFS log file when
>>> > > the mount on the client becomes unavailable:
>>> > >
>>> > > [2011-06-08 14:37:02.568693] I [afr-inode-read.c:270:afr_stat]
>>> > > 0-ksc-replicate-0: /: no child is up
>>> > > [2011-06-08 14:37:02.569212] I [afr-inode-read.c:270:afr_stat]
>>> > > 0-ksc-replicate-0: /: no child is up
>>> > > [2011-06-08 14:37:02.611910] I [afr-inode-read.c:270:afr_stat]
>>> > > 0-ksc-replicate-0: /: no child is up
>>> > > [2011-06-08 14:37:02.624477] I [afr-inode-read.c:270:afr_stat]
>>> > > 0-ksc-replicate-0: /: no child is up
>>> > > [2011-06-08 14:37:04.288272] I [afr-inode-read.c:270:afr_stat]
>>> > > 0-ksc-replicate-0: /: no child is up
>>> > > [2011-06-08 14:37:04.296150] I [afr-inode-read.c:270:afr_stat]
>>> > > 0-ksc-replicate-0: /: no child is up
>>> > > [2011-06-08 14:37:04.309247] I [afr-inode-read.c:270:afr_stat]
>>> > > 0-ksc-replicate-0: /: no child is up
>>> > > [... the identical "no child is up" message from afr-inode-read.c:270:afr_stat
>>> > > repeats ~30 more times between 14:37:04 and 14:37:09 ...]
>>> > > [2011-06-08 14:37:09.382387] I [afr-inode-read.c:270:afr_stat]
>>> > > 0-ksc-replicate-0: /: no child is up
>>> > >
>>> > >
>>> > > Thx for the help
>>> > >
>>> > > _______________________________________________
>>> > > Gluster-users mailing list
>>> > > Gluster-users at gluster.org
>>> > > http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>>> >
>>> >
>>> >
>>> >
>>> > End of Gluster-users Digest, Vol 38, Issue 14
>>> > *********************************************
>>>
>>>
>>>
>>
>
--
Mag. Christopher Anderlik
Leiter Technik
________________________________________________________________________________
Xidras GmbH
Stockern 47
3744 Stockern
Austria
Tel: 0043 2983 201 30 5 01
Fax: 0043 2983 201 30 5 01 9
Email: christopher.anderlik at xidras.com
Web: http://www.xidras.com
FN 317036 f | Landesgericht Krems | ATU64485024
________________________________________________________________________________
CONFIDENTIAL!
This email contains confidential information and is intended for the authorised
recipient only. If you are not an authorised recipient, please return the email
to us and then delete it from your computer and mail-server. You may neither
use nor edit any such emails including attachments, nor make them accessible
to third parties in any manner whatsoever.
Thank you for your cooperation.
________________________________________________________________________________