[Gluster-users] 42 node gluster volume create fails silently

Prasad, Nirmal nprasad at idirect.net
Tue Apr 1 20:02:53 UTC 2014


Got all the nodes up and running. Does anyone know whether the following bug

https://bugzilla.redhat.com/show_bug.cgi?id=1065296

applies to:

1. gluster peer probe - it might be helpful if the CLI accepted several hosts in one command, e.g. gluster peer probe host1 host2 host3 ... hostn
2. gluster volume create/add-brick - it looks like glusterd reaches out to all the nodes and updates them one at a time. I will look at the code, but if those updates proceeded in parallel this could be much faster. (A workaround sketch follows below.)
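
Until then, a rough workaround sketch, assuming node1 through node42 are the peers and each exports a brick at /export/brick1 (both names are placeholders for your layout): probe the peers one at a time and grow the volume one replica pair per add-brick call, which is the pattern that ended up working further down this thread.

# Probe each peer in turn; glusterd serializes these transactions anyway.
for host in node{2..42}; do
    gluster peer probe "$host"
done

# Create the volume with the first replica pair, then add the
# remaining 20 pairs two bricks (one pair) at a time.
gluster volume create gl_disk replica 2 node1:/export/brick1 node2:/export/brick1
for i in $(seq 3 2 41); do
    gluster volume add-brick gl_disk replica 2 \
        node$i:/export/brick1 node$((i+1)):/export/brick1
done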


[root at node1 ~]# gluster volume info

Volume Name: gl_disk
Type: Distributed-Replicate
Volume ID: c703d054-c30a-48f5-88fd-6ab77dc19092
Status: Started
Number of Bricks: 21 x 2 = 42
Transport-type: tcp

-----Original Message-----
From: gluster-users-bounces at gluster.org [mailto:gluster-users-bounces at gluster.org] On Behalf Of Prasad, Nirmal
Sent: Monday, March 31, 2014 8:33 PM
To: Joe Julian
Cc: gluster-users at gluster.org
Subject: Re: [Gluster-users] 42 node gluster volume create fails silently

3.4.2. I am tracing out the problem on the other nodes; I think some of them had partial leftovers from an earlier attempt. The probe and addition process could definitely use some speed - the plumbing should be quick; the fun is in the data ...

Cleared it out with:

setfattr -x trusted.glusterfs.volume-id <mount>
setfattr -x trusted.gfid <mount>
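
For reference, a sketch of the same cleanup run from one node over ssh, assuming passwordless ssh and that every brick lives at /export/brick1 (both assumptions, placeholders for your layout). Removing the brick's internal .glusterfs directory is an extra step commonly paired with these setfattr calls; treat it as an assumption too, and only run it on bricks you intend to reuse from scratch:

# Clear stale volume metadata on every node's brick
# (destructive: also wipes the brick's .glusterfs internals).
for host in node{1..42}; do
    ssh "$host" 'setfattr -x trusted.glusterfs.volume-id /export/brick1;
                 setfattr -x trusted.gfid /export/brick1;
                 rm -rf /export/brick1/.glusterfs'
done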


-----Original Message-----
From: Joe Julian [mailto:joe at julianfamily.org]
Sent: Monday, March 31, 2014 7:06 PM
To: Prasad, Nirmal
Subject: Re: [Gluster-users] 42 node gluster volume create fails silently

What version?

On 03/31/2014 03:59 PM, Prasad, Nirmal wrote:
> Ok - for some reason it did not like 6 of my nodes, but I was able to add 34 nodes two at a time - maybe the client could do a similar split internally based on the replica count. The failure message from add-brick is simply "volume add-brick: failed: "
>
> gluster volume info
>
> Volume Name: gl_disk
> Type: Distributed-Replicate
> Volume ID: c70d525e-a255-41e2-af03-718d6dec0319
> Status: Created
> Number of Bricks: 17 x 2 = 34
> Transport-type: tcp
>
> -----Original Message-----
> From: gluster-users-bounces at gluster.org [mailto:gluster-users-bounces at gluster.org] On Behalf Of Prasad, Nirmal
> Sent: Monday, March 31, 2014 6:20 PM
> To: Dan Lambright
> Cc: gluster-users at gluster.org
> Subject: Re: [Gluster-users] 42 node gluster volume create fails silently
>
> Hi Dan,
>
> Thanks for the quick response. I'm still trying to create the volume, so I have not reached that stage yet - the client exited when I gave it:
>
> gluster volume create <vol-name> replica 2 server1:.. server2:.. .... server41:.. server42:..
>
> If I do :
>
> gluster volume create <vol-name> replica 2 server1:.. server2:..
> gluster volume add-brick <vol-name> replica 2 server3:.. server4:..
>
> it gets me farther ... it looks like there is some timeout on the gluster command - not sure, just an observation.
>
> Thanks
> Regards
> Nirmal
> -----Original Message-----
> From: Dan Lambright [mailto:dlambrig at redhat.com]
> Sent: Monday, March 31, 2014 6:16 PM
> To: Prasad, Nirmal
> Cc: gluster-users at gluster.org
> Subject: Re: [Gluster-users] 42 node gluster volume create fails silently
>
> Hello,
>
> The CLI logs do not contain much. If you remount your gluster volume and reproduce the problem, there may be more to see.
>
> On the client side:
>
> mount -t glusterfs -o log-level=DEBUG,log-file=/tmp/my_client.log 10.16.159.219:/myvol /mnt
>
> On the server side:
>
> gluster volume set myvol diagnostics.brick-sys-log-level WARNING 
> gluster volume set myvol diagnostics.brick-log-level WARNING
>
> You could then attach the most recent log files to your email, or the parts that seem relevant so the email is not too large.
>   
> /tmp/my_client.log
> /var/log/glusterfs/etc*.log
> /var/log/glusterfs/bricks/*.log
>
> ----- Original Message -----
> From: "Nirmal Prasad" <nprasad at idirect.net>
> To: gluster-users at gluster.org
> Sent: Monday, March 31, 2014 6:04:31 PM
> Subject: Re: [Gluster-users] 42 node gluster volume create fails silently
>
> ... and glusterd died. I had success adding nodes individually up to 21 - I will go down that path. Is anyone interested in log files or core files?
>
> service glusterd status
>
> glusterd dead but pid file exists
>
> -----Original Message-----
> From: gluster-users-bounces at gluster.org [mailto:gluster-users-bounces at gluster.org] On Behalf Of Prasad, Nirmal
> Sent: Monday, March 31, 2014 5:57 PM
> To: gluster-users at gluster.org
> Subject: Re: [Gluster-users] 42 node gluster volume create fails silently
>
> Looks symptomatic of some timeout - a subsequent status command gave:
>
> gluster volume status
>
> Another transaction is in progress. Please try again after sometime.
>
> -----Original Message-----
> From: gluster-users-bounces at gluster.org [mailto:gluster-users-bounces at gluster.org] On Behalf Of Prasad, Nirmal
> Sent: Monday, March 31, 2014 5:53 PM
> To: gluster-users at gluster.org
> Subject: [Gluster-users] 42 node gluster volume create fails silently
>
> Not much output - not sure where to look. This is the output in cli.log. There are 42 servers (21 brick pairs) - a timeout perhaps??
>
> [2014-03-31 13:44:34.228467] I [cli-cmd-volume.c:1336:cli_check_gsync_present] 0-: geo-replication not installed
> [2014-03-31 13:44:34.229619] I [cli-cmd-volume.c:392:cli_cmd_volume_create_cbk] 0-cli: Replicate cluster type found. Checking brick order.
> [2014-03-31 13:44:34.230821] I [cli-cmd-volume.c:304:cli_cmd_check_brick_order] 0-cli: Brick order okay
> [2014-03-31 13:44:47.758977] W [rpc-transport.c:175:rpc_transport_load] 0-rpc-transport: missing 'option transport-type'. defaulting to "socket"
> [2014-03-31 13:44:47.763286] I [socket.c:3480:socket_init] 0-glusterfs: SSL support is NOT enabled
> [2014-03-31 13:44:47.763326] I [socket.c:3495:socket_init] 0-glusterfs: using system polling thread
> [2014-03-31 13:44:47.777000] I [cli-cmd-volume.c:1336:cli_check_gsync_present] 0-: geo-replication not installed
> [2014-03-31 13:44:47.780574] I [cli-rpc-ops.c:332:gf_cli_list_friends_cbk] 0-cli: Received resp to list: 0
> [2014-03-31 13:44:47.782086] I [input.c:36:cli_batch] 0-: Exiting with: 0
> [2014-03-31 13:46:34.231761] I [input.c:36:cli_batch] 0-: Exiting with: 110

_______________________________________________
Gluster-users mailing list
Gluster-users at gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

