[Gluster-devel] [Gluster-users] Gluster on an ARM system

Charles Williams chuck at itadmins.net
Mon Aug 15 11:47:04 UTC 2011


Devon,

Our NAS boxes will not be getting Debian. Due to support issues, we will
continue using the default Linux installs that ship on the boxes and add
the extra packages needed to get things working.

Chuck


On 08/13/2011 12:50 AM, Devon Miller wrote:
> For what it's worth, I've been running 3.2.0 for about 4 months now on
> ARM processors  (Globalscale SheevaPlug (armv5tel) running Debian
> squeeze). I have 4 volumes, each running 2 bricks in replicated mode. I
> haven't seen anything like this.
> 
> dcm
> 
> On Fri, Aug 12, 2011 at 7:24 AM, Charles Williams <chuck at itadmins.net> wrote:
> 
>     As discussed with avati on IRC, I am able to set up a user account on the
>     ARM box. I have also done a bit more tracing and have attached an strace
>     of glusterd from startup through the peer probe to the core dump.
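>
>     (A sketch of how such a trace can be captured; the strace flags are
>     standard, but the glusterd options and the output filename here are
>     assumptions rather than the exact command that was used:)
>
>         strace -f -tt -o glusterd-probe.strace \
>             /opt/sbin/glusterd --no-daemon --log-level=DEBUG
>         # then, from a second shell:
>         /opt/sbin/gluster peer probe zmn1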
> 
>     chuck
> 
>     On 08/11/2011 08:50 PM, John Mark Walker wrote:
>     > Hi Charles,
>     >
>     > We have plans in the future to work on an ARM port, but that won't
>     > come to fruition for some time.
>     >
>     > I've CC'd the gluster-devel list in the hopes that someone there
>     > can help you out. However, my understanding is that it will take
>     > some significant porting to get GlusterFS to run in any production
>     > capacity on ARM.
>     >
>     > Once we have more news on the ARM front, I'll be happy to share it
>     > here and elsewhere.
>     >
>     > Please send all responses to gluster-devel, as that is the proper
>     > place for this conversation.
>     >
>     > Thanks,
>     > John Mark Walker
>     > Gluster Community Guy
>     >
>     > ________________________________________
>     > From: gluster-users-bounces at gluster.org
>     > [gluster-users-bounces at gluster.org] on behalf of Charles Williams
>     > [chuck at itadmins.net]
>     > Sent: Thursday, August 11, 2011 3:48 AM
>     > To: gluster-users at gluster.org
>     > Subject: Re: [Gluster-users] Gluster on an ARM system
>     >
>     > OK, running glusterd on the ARM box under gdb and then doing a gluster
>     > peer probe zmn1, I get the following from gdb when glusterd core dumps:
>     >
>     > [2011-08-11 12:46:35.326998] D
>     > [glusterd-utils.c:2627:glusterd_friend_find_by_hostname] 0-glusterd:
>     > Friend zmn1 found.. state: 0
>     >
>     > Program received signal SIGSEGV, Segmentation fault.
>     > 0x4008e954 in rpc_transport_connect (this=0x45c48, port=0) at
>     > rpc-transport.c:810
>     > 810             ret = this->ops->connect (this, port);
>     > (gdb)
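>     >
>     > (A sketch of the kind of gdb session that produces the trace above;
>     > the glusterd flags shown are the stock ones and may differ on this
>     > build:)
>     >
>     >     gdb --args /opt/sbin/glusterd --no-daemon --log-level=DEBUG
>     >     (gdb) run
>     >     # in another shell: /opt/sbin/gluster peer probe zmn1
>     >     (gdb) bt full           # full backtrace once the SIGSEGV hits
>     >     (gdb) print *this       # the rpc_transport_t being dereferenced
>     >     (gdb) print this->ops   # a bad ops table here is one possible
>     >                             # explanation for the crash at rpc-transport.c:810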
>     >
>     >
>     > On 08/11/2011 10:49 AM, Charles Williams wrote:
>     >> Sorry,
>     >>
>     >> the last lines of the debug info should be:
>     >>
>     >> [2011-08-11 10:38:21.499022] D
>     >> [glusterd-utils.c:2627:glusterd_friend_find_by_hostname] 0-glusterd:
>     >> Friend zmn1 found.. state: 0
>     >> Segmentation fault (core dumped)
>     >>
>     >>
>     >>
>     >> On 08/11/2011 10:46 AM, Charles Williams wrote:
>     >>> Hey all,
>     >>>
>     >>> So I went ahead and did a test install on my QNAP TS412U (ARM based)
>     >>> and all went well with the build and install. The problems started
>     >>> afterwards.
>     >>>
>     >>> QNAP (ARM server) config:
>     >>>
>     >>> volume management-zmn1
>     >>>     type mgmt/glusterd
>     >>>     option working-directory /opt/etc/glusterd
>     >>>     option transport-type socket
>     >>>     option transport.address-family inet
>     >>>     option transport.socket.keepalive-time 10
>     >>>     option transport.socket.keepalive-interval 2
>     >>> end-volume
>     >>>
>     >>>
>     >>> zmn1 (Dell PowerEdge) config:
>     >>>
>     >>> volume management
>     >>>     type mgmt/glusterd
>     >>>     option working-directory /etc/glusterd
>     >>>     option transport-type socket
>     >>>     option transport.address-family inet
>     >>>     option transport.socket.keepalive-time 10
>     >>>     option transport.socket.keepalive-interval 2
>     >>> end-volume
>     >>>
>     >>>
>     >>> When I tried to do a peer probe from the QNAP server to add the first
>     >>> server into the cluster, glusterd segfaulted with a core dump:
>     >>>
>     >>> [2011-08-11 10:38:21.457839] I
>     >>> [glusterd-handler.c:623:glusterd_handle_cli_probe] 0-glusterd:
>     >>> Received CLI probe req zmn1 24007
>     >>> [2011-08-11 10:38:21.459508] D
>     >>> [glusterd-utils.c:213:glusterd_is_local_addr] 0-glusterd:
>     >>> zmn1 is not local
>     >>> [2011-08-11 10:38:21.460162] D
>     >>> [glusterd-utils.c:2675:glusterd_friend_find_by_hostname] 0-glusterd:
>     >>> Unable to find friend: zmn1
>     >>> [2011-08-11 10:38:21.460682] D
>     >>> [glusterd-utils.c:2675:glusterd_friend_find_by_hostname] 0-glusterd:
>     >>> Unable to find friend: zmn1
>     >>> [2011-08-11 10:38:21.460766] I
>     >>> [glusterd-handler.c:391:glusterd_friend_find] 0-glusterd:
>     >>> Unable to find hostname: zmn1
>     >>> [2011-08-11 10:38:21.460843] I
>     >>> [glusterd-handler.c:3417:glusterd_probe_begin] 0-glusterd: Unable to
>     >>> find peerinfo for host: zmn1 (24007)
>     >>> [2011-08-11 10:38:21.460943] D
>     >>> [glusterd-utils.c:3080:glusterd_sm_tr_log_init] 0-: returning 0
>     >>> [2011-08-11 10:38:21.461017] D
>     >>> [glusterd-utils.c:3169:glusterd_peerinfo_new] 0-: returning 0
>     >>> [2011-08-11 10:38:21.461199] D
>     >>> [glusterd-handler.c:3323:glusterd_transport_inet_keepalive_options_build]
>     >>> 0-glusterd: Returning 0
>     >>> [2011-08-11 10:38:21.465952] D
>     >>> [rpc-clnt.c:914:rpc_clnt_connection_init]
>     >>> 0-management-zmn1: defaulting frame-timeout to 30mins
>     >>> [2011-08-11 10:38:21.466146] D
>     >>> [rpc-transport.c:672:rpc_transport_load]
>     >>> 0-rpc-transport: attempt to load file
>     >>> /opt/lib/glusterfs/3.2.2/rpc-transport/socket.so
>     >>> [2011-08-11 10:38:21.466346] D
>     >>> [rpc-transport.c:97:__volume_option_value_validate] 0-management-zmn1:
>     >>> no range check required for 'option transport.socket.keepalive-time 10'
>     >>> [2011-08-11 10:38:21.466460] D
>     >>> [rpc-transport.c:97:__volume_option_value_validate] 0-management-zmn1:
>     >>> no range check required for 'option transport.socket.keepalive-interval 2'
>     >>> [2011-08-11 10:38:21.466570] D
>     >>> [rpc-transport.c:97:__volume_option_value_validate] 0-management-zmn1:
>     >>> no range check required for 'option remote-port 24007'
>     >>> [2011-08-11 10:38:21.467862] D [common-utils.c:151:gf_resolve_ip6]
>     >>> 0-resolver: returning ip-10.1.0.1 (port-24007) for hostname: zmn1 and
>     >>> port: 24007
>     >>> [2011-08-11 10:38:21.468417] D
>     >>> [glusterd-handler.c:3277:glusterd_rpc_create] 0-: returning 0
>     >>> [2011-08-11 10:38:21.468576] D
>     >>> [glusterd-store.c:1728:glusterd_store_create_peer_dir] 0-:
>     >>> Returning with 0
>     >>> [2011-08-11 10:38:21.468811] D
>     >>> [glusterd-store.c:981:glusterd_store_handle_new] 0-: Returning 0
>     >>> [2011-08-11 10:38:21.469130] D
>     >>> [glusterd-store.c:936:glusterd_store_save_value] 0-: returning: 0
>     >>> [2011-08-11 10:38:21.469285] D
>     >>> [glusterd-store.c:936:glusterd_store_save_value] 0-: returning: 0
>     >>> [2011-08-11 10:38:21.469418] D
>     >>> [glusterd-store.c:936:glusterd_store_save_value] 0-: returning: 0
>     >>> [2011-08-11 10:38:21.469490] D
>     >>> [glusterd-store.c:1842:glusterd_store_peer_write] 0-:
>     >>> Returning with 0
>     >>> [2011-08-11 10:38:21.497268] D
>     >>> [glusterd-store.c:1870:glusterd_store_perform_peer_store] 0-:
>     >>> Returning 0
>     >>> [2011-08-11 10:38:21.497391] D
>     >>> [glusterd-store.c:1891:glusterd_store_peerinfo] 0-: Returning with 0
>     >>> [2011-08-11 10:38:21.497469] I
>     >>> [glusterd-handler.c:3399:glusterd_friend_add] 0-glusterd:
>     >>> connect returned 0
>     >>> [2011-08-11 10:38:21.497542] D
>     >>> [glusterd-handler.c:3448:glusterd_probe_begin] 0-: returning 100
>     >>> [2011-08-11 10:38:21.497791] D
>     >>> [glusterd-handler.c:3849:glusterd_peer_rpc_notify] 0-management-zmn1:
>     >>> got RPC_CLNT_CONNECT
>     >>> [2011-08-11 10:38:21.498576] D
>     >>> [glusterd-handshake.c:308:glusterd_set_clnt_mgmt_program] 0-:
>     >>> GF-DUMP (123451501:1) not supported
>     >>> [2011-08-11 10:38:21.498685] I
>     >>> [glusterd-handshake.c:317:glusterd_set_clnt_mgmt_program] 0-: Using
>     >>> Program glusterd clnt mgmt, Num (1238433), Version (1)
>     >>> [2011-08-11 10:38:21.498777] D
>     >>> [glusterd-sm.c:893:glusterd_friend_sm_inject_event] 0-glusterd:
>     >>> Enqueuing event: 'GD_FRIEND_EVENT_CONNECTED'
>     >>> [2011-08-11 10:38:21.498854] D
>     >>> [glusterd-handshake.c:274:glusterd_event_connected_inject] 0-:
>     >>> returning 0
>     >>> [2011-08-11 10:38:21.498927] D
>     >>> [glusterd-sm.c:948:glusterd_friend_sm]
>     >>> 0-: Dequeued event of type: 'GD_FRIEND_EVENT_CONNECTED'
>     >>> [2011-08-11 10:38:21.499022] D
>     >>> [glusterd-utils.c:2627:glusterd_friend_find_by_hostname] 0-glusterd:
>     >>> Friend zmn1 found.. state: 0
>     >>>
>     >>>
>     >>>
>     >>> After restarting glusterd on the QNAP box I did a peer status and
>     >>> received the following:
>     >>>
>     >>> [admin@NASC123B8 ~]# /opt/sbin/gluster peer status
>     >>> Number of Peers: 1
>     >>>
>     >>> Hostname: zmn1
>     >>> Uuid: 00000000-0000-0000-0000-000000000000
>     >>> State: Establishing Connection (Connected)
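>     >>>
>     >>> (The all-zero UUID can also be cross-checked against what glusterd
>     >>> persisted on disk; the working directory on this box is
>     >>> /opt/etc/glusterd, so something like the following should show the
>     >>> stored peer record. The exact on-disk layout is an assumption based
>     >>> on the default glusterd store:)
>     >>>
>     >>>     cat /opt/etc/glusterd/peers/*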
>     >>>
>     >>>
>     >>> If I stop glusterd on both servers and delete /etc/glusterd on both,
>     >>> then restart I always get the same result.
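>     >>>
>     >>> (Concretely, the reset on each box is roughly the following; paths
>     >>> differ per machine and the exact way glusterd is stopped and started
>     >>> depends on the install:)
>     >>>
>     >>>     killall glusterd          # or the init script's stop action
>     >>>     rm -rf /etc/glusterd      # /opt/etc/glusterd on the QNAP box
>     >>>     /opt/sbin/glusterd        # /usr/sbin/glusterd (or equivalent) on zmn1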
>     >>>
>     >>> Any ideas?
>     >>>
>     >>> thanks,
>     >>> Chuck
>     >>> _______________________________________________
>     >>> Gluster-users mailing list
>     >>> Gluster-users at gluster.org
>     >>> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>     >>
>     >> _______________________________________________
>     >> Gluster-users mailing list
>     >> Gluster-users at gluster.org
>     >> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>     >
>     > _______________________________________________
>     > Gluster-users mailing list
>     > Gluster-users at gluster.org
>     > http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
> 
> 
>     _______________________________________________
>     Gluster-users mailing list
>     Gluster-users at gluster.org
>     http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
> 
> 




