[Gluster-users] volume start causes glusterd to core dump in 3.5.0

Matthew Rinella MRinella at apptio.com
Tue Apr 29 14:29:17 UTC 2014


It is indeed ext4.  I will give XFS a try.

Matthew Rinella
Sr. Systems Administrator

Apptio, Inc. | Technology Business Management

From: Carlos Capriotti [mailto:capriotti.carlos at gmail.com]
Sent: Monday, April 28, 2014 9:20 PM
To: Matthew Rinella
Cc: gluster-users at gluster.org
Subject: Re: [Gluster-users] volume start causes glusterd to core dump in 3.5.0

Matthew, just out of curiosity, what is the underlying file system on your gluster bricks ?

Reason for my asking is that there is (or was) a known issue with ext4 as a brick file system. If you used it, you might want to reformat the bricks as XFS with a 512-byte inode size (mkfs.xfs -i size=512).
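For example, a minimal sketch of reformatting a brick (destructive, this erases the device; the device and mount point names are placeholders, not your actual ones):

```shell
# Reformat the brick device as XFS with 512-byte inodes, the size
# commonly recommended for Gluster bricks. /dev/xvdb and
# /bricks/brick1 are placeholder names -- substitute your own.
mkfs.xfs -f -i size=512 /dev/xvdb
mkdir -p /bricks/brick1
mount /dev/xvdb /bricks/brick1
```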



On Tue, Apr 29, 2014 at 1:59 AM, Matthew Rinella <MRinella at apptio.com> wrote:

I just built a pair of AWS Red Hat 6.5 instances to create a gluster replicated pair file system.  I can install everything, peer probe, and create the volume, but as soon as I try to start the volume, glusterd dumps core.
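For reference, the steps up to the crash were roughly the following (a sketch; the volume name and brick paths are illustrative, only the hostnames come from the logs below):

```shell
# On fed-gfs3: probe the peer and create a two-way replicated volume.
# "gv0" and the brick paths are placeholder names.
gluster peer probe fed-gfs4
gluster volume create gv0 replica 2 \
    fed-gfs3:/bricks/brick1 fed-gfs4:/bricks/brick1

# This is the step that makes glusterd dump core:
gluster volume start gv0
```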

The tail of the log after the crash:

+------------------------------------------------------------------------------+
[2014-04-28 21:49:18.102981] I [glusterd-rpc-ops.c:356:__glusterd_friend_add_cbk] 0-glusterd: Received ACC from uuid: 84c0bb8e-bf48-4386-ada1-3e4db68b980f, host: fed-gfs4, port: 0
[2014-04-28 21:49:18.138936] I [glusterd-handler.c:2212:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: 84c0bb8e-bf48-4386-ada1-3e4db68b980f
[2014-04-28 21:49:18.138982] I [glusterd-handler.c:2257:__glusterd_handle_friend_update] 0-: Received uuid: c7a11029-12ab-4e3a-b898-7c62e98fa4d1, hostname:fed-gfs3
[2014-04-28 21:49:18.138995] I [glusterd-handler.c:2266:__glusterd_handle_friend_update] 0-: Received my uuid as Friend
[2014-04-28 21:49:18.179134] I [glusterd-handshake.c:563:__glusterd_mgmt_hndsk_versions_ack] 0-management: using the op-version 2
[2014-04-28 21:49:18.199020] I [glusterd-handler.c:2050:__glusterd_handle_incoming_friend_req] 0-glusterd: Received probe from uuid: 84c0bb8e-bf48-4386-ada1-3e4db68b980f
[2014-04-28 21:49:18.199111] I [glusterd-handler.c:3085:glusterd_xfer_friend_add_resp] 0-glusterd: Responded to fed-gfs4 (0), ret: 0
[2014-04-28 21:49:18.222248] I [glusterd-sm.c:495:glusterd_ac_send_friend_update] 0-: Added uuid: 84c0bb8e-bf48-4386-ada1-3e4db68b980f, host: fed-gfs4
[2014-04-28 21:49:18.262901] I [glusterd-rpc-ops.c:553:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 84c0bb8e-bf48-4386-ada1-3e4db68b980f
[2014-04-28 21:49:20.401429] I [glusterd-handler.c:1169:__glusterd_handle_cli_get_volume] 0-glusterd: Received get vol req
[2014-04-28 21:49:20.402072] I [glusterd-handler.c:1169:__glusterd_handle_cli_get_volume] 0-glusterd: Received get vol req
pending frames:
frame : type(0) op(0)

patchset: git://git.gluster.com/glusterfs.git
signal received: 6
time of crash: 2014-04-28 21:53:12
configuration details:
argp 1
backtrace 1
dlfcn 1
fdatasync 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 3.5.0
/lib64/libc.so.6(+0x329a0)[0x7eff4946e9a0]
/lib64/libc.so.6(gsignal+0x35)[0x7eff4946e925]
/lib64/libc.so.6(abort+0x175)[0x7eff49470105]
/usr/lib64/libcrypto.so.10(+0x67ebf)[0x7eff49837ebf]
/usr/lib64/libcrypto.so.10(MD5_Init+0x49)[0x7eff4983e619]
/usr/lib64/libcrypto.so.10(MD5+0x3a)[0x7eff4983e9ea]
/usr/lib64/libglusterfs.so.0(md5_wrapper+0x3c)[0x7eff4ae6091c]
/usr/lib64/glusterfs/3.5.0/xlator/mgmt/glusterd.so(glusterd_set_socket_filepath+0x72)[0x7eff45e02b72]
/usr/lib64/glusterfs/3.5.0/xlator/mgmt/glusterd.so(glusterd_set_brick_socket_filepath+0x158)[0x7eff45e02df8]
/usr/lib64/glusterfs/3.5.0/xlator/mgmt/glusterd.so(glusterd_volume_start_glusterfs+0x4c9)[0x7eff45e094a9]
/usr/lib64/glusterfs/3.5.0/xlator/mgmt/glusterd.so(glusterd_brick_start+0x119)[0x7eff45e0af29]
/usr/lib64/glusterfs/3.5.0/xlator/mgmt/glusterd.so(glusterd_op_start_volume+0xfd)[0x7eff45e45a8d]
/usr/lib64/glusterfs/3.5.0/xlator/mgmt/glusterd.so(glusterd_op_commit_perform+0x53b)[0x7eff45df471b]
/usr/lib64/glusterfs/3.5.0/xlator/mgmt/glusterd.so(gd_commit_op_phase+0xbe)[0x7eff45e5193e]
/usr/lib64/glusterfs/3.5.0/xlator/mgmt/glusterd.so(gd_sync_task_begin+0x2c2)[0x7eff45e53632]
/usr/lib64/glusterfs/3.5.0/xlator/mgmt/glusterd.so(glusterd_op_begin_synctask+0x3b)[0x7eff45e5376b]
/usr/lib64/glusterfs/3.5.0/xlator/mgmt/glusterd.so(__glusterd_handle_cli_start_volume+0x1b6)[0x7eff45e46cc6]
/usr/lib64/glusterfs/3.5.0/xlator/mgmt/glusterd.so(glusterd_big_locked_handler+0x3f)[0x7eff45ddaf7f]
/usr/lib64/libglusterfs.so.0(synctask_wrap+0x12)[0x7eff4ae818e2]
/lib64/libc.so.6(+0x43bf0)[0x7eff4947fbf0]

Interestingly enough, I downloaded and installed 3.5.0 because the same thing happened with 3.4.2 on Red Hat 6.5 instances in AWS. I tore the instances down and rebuilt them, with the same results. Is there a library it's conflicting with? I am still new to Gluster, so I don't know many of the details of what to look for when it crashes.
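One way to get more detail out of a crash like this is to open the core file in gdb (a sketch; the core file path is an example, use whatever path your system actually wrote the core to):

```shell
# Install debugging symbols for the glusterfs packages (RHEL/CentOS).
debuginfo-install glusterfs glusterfs-server

# Open the core against the glusterd binary. /core.12345 is a
# placeholder -- substitute the real core file path from abrt
# or your kernel's core_pattern setting.
gdb /usr/sbin/glusterd /core.12345

# Then, at the gdb prompt, print the full backtrace of the
# crashing thread, including local variables:
#   (gdb) bt full
```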


Thanks for any help.

_______________________________________________
Gluster-users mailing list
Gluster-users at gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


