[Gluster-devel] Glusterfs core report.

yang.bin18 at zte.com.cn yang.bin18 at zte.com.cn
Thu Apr 28 07:42:40 UTC 2016


Yeah, I have checked
https://bugzilla.redhat.com/show_bug.cgi?id=892601.

But that bug report has no related detail, so I cannot tell whether it is
the same issue.

----- Original Message -----
> From: "yang bin18" <yang.bin18 at zte.com.cn>
> To: "Vijay Bellur" <vbellur at redhat.com>
> Cc: "Gluster Devel" <gluster-devel at gluster.org>
> Sent: Friday, April 22, 2016 6:38:07 AM
> Subject: Re: [Gluster-devel] Glusterfs core report.
> 
> Here is a clue I think will be useful, thanks.

Did you check the glibc corruption issue I mentioned in my previous
mail? This looks like the same issue.

> 
> (gdb) up
> #1 0x00007f308ca06098 in abort () from /lib64/libc.so.6
> (gdb) up
> #2 0x00007f308ca45197 in __libc_message () from /lib64/libc.so.6
> (gdb) up
> #3 0x00007f308ca4c56d in _int_free () from /lib64/libc.so.6
> (gdb) up
> #4 0x00007f308096ebc1 in dht_local_wipe (this=0x7f308f306460, local=0x7f3080042880) at dht-helper.c:475
> 475 iobref_unref (local->rebalance.iobref);
> (gdb) p local->rebalance.iobref
> $1 = (struct iobref *) 0x7f30740022c0
> (gdb) p *local->rebalance.iobref
> $2 = {lock = 1, ref = 2, iobrefs = 0x7f307402f240, alloced = 16, used = 1}
> (gdb)
> 
> Thanks for your response.
> 
> We use glusterfs 3.6.7.
> 
> Sure, we use CentOS 7.0.
> 
> The related log is shown below:
> 
> 143 [2016-04-13 06:33:54.236013] W [glusterfsd.c:1211:cleanup_and_exit] (--> 0-: received signum (15), shutting down
> 144 [2016-04-13 06:33:54.236081] I [fuse-bridge.c:5607:fini] 0-fuse: Unmounting '/var/lib/nova/mnt/a77401a594b06b2b56cc52ee61bb4def'.
> 145 [2016-04-13 06:55:11.365558] I [MSGID: 100030] [glusterfsd.c:2035:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.6.7 (args: /usr/sbin/glusterfs --volfile-server=10.74.125.254 --volfile-id=/xitos-volume /var/lib/nova/mnt/a77401a594b06b2b56cc52ee61bb4def)
> 146 [2016-04-13 06:55:11.402762] I [dht-shared.c:337:dht_init_regex] 0-xitos-volume-dht: using regex rsync-hash-regex = ^\.(.+)\.[^.]+$
> 147 [2016-04-13 06:55:11.404087] I [client.c:2268:notify] 0-xitos-volume-client-0: parent translators are ready, attempting connect on transport
> 148 Final graph:
> 149
> +------------------------------------------------------------------------------+
> 150 1: volume xitos-volume-client-0
> 151 2: type protocol/client
> 152 3: option ping-timeout 42
> 153 4: option remote-host 10.74.125.247
> 154 5: option remote-subvolume /mnt/dht/volume
> 155 6: option transport-type socket
> 156 7: option send-gids true
> 157 8: end-volume
> 158 9:
> 159 10: volume xitos-volume-dht
> 160 11: type cluster/distribute
> 161 12: subvolumes xitos-volume-client-0
> 162 13: end-volume
> 163 14:
> 164 15: volume xitos-volume-write-behind
> 165 16: type performance/write-behind
> 166 17: subvolumes xitos-volume-dht
> 167 18: end-volume
> 168 19:
> 169 20: volume xitos-volume-read-ahead
> 170 21: type performance/read-ahead
> 171 22: subvolumes xitos-volume-write-behind
> 172 23: end-volume
> 173 24:
> 174 25: volume xitos-volume-io-cache
> 175 26: type performance/io-cache
> 176 27: subvolumes xitos-volume-read-ahead
> 177 28: end-volume
> 178 29:
> 179 30: volume xitos-volume-quick-read
> 180 31: type performance/quick-read
> 181 32: subvolumes xitos-volume-io-cache
> 182 33: end-volume
> 183 34:
> 184 35: volume xitos-volume-open-behind
> 185 36: type performance/open-behind
> 186 37: subvolumes xitos-volume-quick-read
> 187 38: end-volume
> 188 39:
> 189 40: volume xitos-volume-md-cache
> 190 41: type performance/md-cache
> 191 42: subvolumes xitos-volume-open-behind
> 192 43: end-volume
> 193 44:
> 194 45: volume xitos-volume
> 195 46: type debug/io-stats
> 196 47: option latency-measurement off
> 197 48: option count-fop-hits off
> 198 49: subvolumes xitos-volume-md-cache
> 199 50: end-volume
> 200 51:
> 201 52: volume meta-autoload
> 202 53: type meta
> 203 54: subvolumes xitos-volume
> 204 55: end-volume
> 205 56:
> 206
> +------------------------------------------------------------------------------+
> 207 [2016-04-13 06:55:11.408891] I [rpc-clnt.c:1761:rpc_clnt_reconfig] 0-xitos-volume-client-0: changing port to 49152 (from 0)
> 208 [2016-04-13 06:55:11.413254] I [client-handshake.c:1413:select_server_supported_programs] 0-xitos-volume-client-0: Using Program GlusterFS 3.3, Num (1298437), Version (330)
> 209 [2016-04-13 06:55:11.413766] I [client-handshake.c:1200:client_setvolume_cbk] 0-xitos-volume-client-0: Connected to xitos-volume-client-0, attached to remote volume '/mnt/dht/volume'.
> 210 [2016-04-13 06:55:11.413795] I [client-handshake.c:1210:client_setvolume_cbk] 0-xitos-volume-client-0: Server and Client lk-version numbers are not same, reopening the fds
> 211 [2016-04-13 06:55:11.420494] I [fuse-bridge.c:5086:fuse_graph_setup] 0-fuse: switched to graph 0
> 212 [2016-04-13 06:55:11.420691] I [client-handshake.c:188:client_set_lk_version_cbk] 0-xitos-volume-client-0: Server lk version = 1
> 213 [2016-04-13 06:55:11.420921] I [fuse-bridge.c:4015:fuse_init] 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.22 kernel 7.22
> 214 pending frames:
> 215 frame : type(0) op(0)
> 216 frame : type(0) op(0)
> 217 frame : type(0) op(0)
> 218 frame : type(0) op(0)
> 219 frame : type(0) op(0)
> 220 frame : type(0) op(0)
> 221 frame : type(0) op(0)
> 222 patchset: git://git.gluster.com/glusterfs.git
> 223 signal received: 6
> 224 time of crash:
> 225 2016-04-13 08:15:37
> 226 configuration details:
> 227 argp 1
> 228 backtrace 1
> 229 dlfcn 1
> 230 libpthread 1
> 231 llistxattr 1
> 232 setfsid 1
> 233 spinlock 1
> 234 epoll.h 1
> 235 xattr.h 1
> 236 st_atim.tv_nsec 1
> 237 package-string: glusterfs 3.6.7
> 238 /lib64/libglusterfs.so.0(_gf_msg_backtrace_nomem+0xb2)[0x7f308d9e93d2]
> 239 /lib64/libglusterfs.so.0(gf_print_trace+0x32d)[0x7f308da008bd]
> 240 /lib64/libc.so.6(+0x35a00)[0x7f308ca04a00]
> 241 /lib64/libc.so.6(gsignal+0x39)[0x7f308ca04989]
> 242 /lib64/libc.so.6(abort+0x148)[0x7f308ca06098]
> 243 /lib64/libc.so.6(+0x76197)[0x7f308ca45197]
> 244 /lib64/libc.so.6(+0x7d56d)[0x7f308ca4c56d]
> 245 /usr/lib64/glusterfs/3.6.7/xlator/cluster/distribute.so(dht_local_wipe+0x151)[0x7f308096ebc1]
> 246 /usr/lib64/glusterfs/3.6.7/xlator/cluster/distribute.so(dht_writev_cbk+0x19d)[0x7f308099b9fd]
> 247 /usr/lib64/glusterfs/3.6.7/xlator/protocol/client.so(client3_3_writev_cbk+0x672)[0x7f3080be4512]
> 248 /lib64/libgfrpc.so.0(rpc_clnt_handle_reply+0x90)[0x7f308d7bd100]
> 249 /lib64/libgfrpc.so.0(rpc_clnt_notify+0x174)[0x7f308d7bd374]
> 250 /lib64/libgfrpc.so.0(rpc_transport_notify+0x23)[0x7f308d7b92c3]
> 251 /usr/lib64/glusterfs/3.6.7/rpc-transport/socket.so(+0x87a0)[0x7f3082ac17a0]
> 252 /usr/lib64/glusterfs/3.6.7/rpc-transport/socket.so(+0xaf94)[0x7f3082ac3f94]
> 253 /lib64/libglusterfs.so.0(+0x768c2)[0x7f308da3e8c2]
> 254 /usr/sbin/glusterfs(main+0x502)[0x7f308de92fe2]
> 255 /lib64/libc.so.6(__libc_start_main+0xf5)[0x7f308c9f0af5]
> 256 /usr/sbin/glusterfs(+0x6381)[0x7f308de93381]
> 257 ---------
> 258 [2016-04-13 08:36:22.385866] I [MSGID: 100030] [glusterfsd.c:2035:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.6.7 (args: /usr/sbin/glusterfs --volfile-server=10.74.125.254 --volfile-id=/xitos-volume /var/lib/nova/mnt/a77401a594b06b2b56cc52ee61bb4def)
> 259 [2016-04-13 08:36:22.409856] I [dht-shared.c:337:dht_init_regex] 0-xitos-volume-dht: using regex rsync-hash-regex = ^\.(.+)\.[^.]+$
> 260 [2016-04-13 08:36:22.411477] I [client.c:2268:notify] 0-xitos-volume-client-0: parent translators are ready, attempting connect on transport
> 261 Final graph:
> 262
> +------------------------------------------------------------------------------+
> 263 1: volume xitos-volume-client-0
> 264 2: type protocol/client
> 265 3: option ping-timeout 42
> 
> On Fri, Apr 15, 2016 at 2:43 AM, yang.bin18 at zte.com.cn <yang.bin18 at zte.com.cn> wrote:
> Glusterfs cores when mounting; here is the backtrace.
> 
> 
> Program terminated with signal 6, Aborted.
> #0 0x00007f308ca04989 in raise () from /lib64/libc.so.6
> Missing separate debuginfos, use: debuginfo-install glibc-2.17-55.el7.x86_64 keyutils-libs-1.5.8-3.el7.x86_64 krb5-libs-1.11.3-49.el7.x86_64 libcom_err-1.42.9-4.el7.x86_64 libgcc-4.8.2-16.el7.x86_64 libselinux-2.2.2-6.el7.x86_64 openssl-libs-1.0.1e-34.el7.7.x86_64 pcre-8.32-12.el7.x86_64 sssd-client-1.11.2-65.el7.x86_64 xz-libs-5.1.2-8alpha.el7.x86_64 zlib-1.2.7-13.el7.x86_64
> (gdb) bt
> #0 0x00007f308ca04989 in raise () from /lib64/libc.so.6
> #1 0x00007f308ca06098 in abort () from /lib64/libc.so.6
> #2 0x00007f308ca45197 in __libc_message () from /lib64/libc.so.6
> #3 0x00007f308ca4c56d in _int_free () from /lib64/libc.so.6
> #4 0x00007f308096ebc1 in dht_local_wipe (this=0x7f308f306460, local=0x7f3080042880) at dht-helper.c:475
> #5 0x00007f308099b9fd in dht_writev_cbk (frame=0x7f308bc11e5c, cookie=<optimized out>, this=<optimized out>, op_ret=131072, op_errno=0, prebuf=<optimized out>, postbuf=0x7fff2020c870, xdata=0x0) at dht-inode-write.c:84
> #6 0x00007f3080be4512 in client3_3_writev_cbk (req=<optimized out>, iov=<optimized out>, count=<optimized out>, myframe=0x7f308bc116f8) at client-rpc-fops.c:856
> #7 0x00007f308d7bd100 in rpc_clnt_handle_reply (clnt=clnt@entry=0x7f308f3292d0, pollin=pollin@entry=0x7f308f39bb10) at rpc-clnt.c:763
> #8 0x00007f308d7bd374 in rpc_clnt_notify (trans=<optimized out>, mydata=0x7f308f329300, event=<optimized out>, data=0x7f308f39bb10) at rpc-clnt.c:891
> #9 0x00007f308d7b92c3 in rpc_transport_notify (this=this@entry=0x7f308f35f3f0, event=event@entry=RPC_TRANSPORT_MSG_RECEIVED, data=data@entry=0x7f308f39bb10) at rpc-transport.c:516
> #10 0x00007f3082ac17a0 in socket_event_poll_in (this=this@entry=0x7f308f35f3f0) at socket.c:2234
> #11 0x00007f3082ac3f94 in socket_event_handler (fd=<optimized out>, idx=1, data=data@entry=0x7f308f35f3f0, poll_in=1, poll_out=0, poll_err=0) at socket.c:2347
> #12 0x00007f308da3e8c2 in event_dispatch_epoll_handler (i=<optimized out>, events=0x7f308f2febd0, event_pool=0x7f308f2b76c0) at event-epoll.c:384
> #13 event_dispatch_epoll (event_pool=0x7f308f2b76c0) at event-epoll.c:445
> #14 0x00007f308de92fe2 in main (argc=4, argv=0x7fff2020de78) at glusterfsd.c:2060
> 
> 
> Thanks for the report. What version of gluster is being used here? I
> assume this is on CentOS 7.
> 
> Would it be possible to share the client log file?
> 
> Regards,
> Vijay
> 
> --------------------------------------------------------
> ZTE Information Security Notice: The information contained in this mail (and any attachment transmitted herewith) is privileged and confidential and is intended for the exclusive use of the addressee(s). If you are not an intended recipient, any disclosure, reproduction, distribution or other dissemination or use of the information contained is strictly prohibited. If you have received this mail in error, please delete it and notify us immediately.
> 
> 
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel at gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel

