[Gluster-users] failed to start glusterd after reboot

songxin songxin_1980 at 126.com
Thu Feb 25 10:42:10 UTC 2016


Thanks for your reply.


Do I need to check all the files in /var/lib/glusterd/*?
Must all the files be the same on node A and node B?
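
Would something like this be enough to compare them? (a sketch, assuming
root ssh access from node A to node B at 128.224.95.140; the dry run lists
every file that differs by checksum)

    # -n: dry run, -c: compare by checksum, -a: recurse and keep attributes, -v: list files
    rsync -avnc /var/lib/glusterd/ root@128.224.95.140:/var/lib/glusterd/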


I found that the size of the file /var/lib/glusterd/snaps/.nfs0000000001722f4000000002 is 0 bytes after the A board reboots.
So glusterd can't restore from this snap file on node A.
Is that right?
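
(To spot such leftovers, a sketch that lists zero-length .nfs* files:)

    # list empty .nfs* files left under the glusterd snaps directory
    find /var/lib/glusterd/snaps -name '.nfs*' -size 0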

At 2016-02-25 18:25:50, "Atin Mukherjee" <amukherj at redhat.com> wrote:
>I believe you and Abhishek are from the same group and share a common
>set up. Could you check whether the content of /var/lib/glusterd/* on
>board B (post reboot and before starting glusterd) matches
>/var/lib/glusterd/* on board A?
>
>~Atin
>
>On 02/25/2016 03:48 PM, songxin wrote:
>> Hi,
>> I hit the problem below when I start glusterd after rebooting a board.
>> 
>> precondition: 
>> I use two boards to do this test.
>> The version of glusterfs is 3.7.6.
>> 
>> A board ip:128.224.162.255 
>> B board ip:128.224.95.140 
>> 
>> reproduce steps:
>> 
>> 1. systemctl start glusterd (A board)
>> 2. systemctl start glusterd (B board)
>> 3. gluster peer probe 128.224.95.140 (A board)
>> 4. gluster volume create gv0 replica 2 128.224.95.140:/tmp/brick1/gv0
>> 128.224.162.255:/data/brick/gv0 force (A board)
>> 5. gluster volume start gv0 (A board)
>> 6. press the reset button on the A board. It is a development board, so
>> it has a reset button similar to the one on a PC. (A board)
>> 7. run the command "systemctl start glusterd" after the A board reboots.
>> The command fails because of the file
>> /var/lib/glusterd/snaps/.nfsxxxxxxxxx (A board).
>> The log is below:
>> [2015-12-07 07:55:38.260084] E [MSGID: 101032] [store.c:434:gf_store_handle_retrieve] 0-: Path corresponding to /var/lib/glusterd/snaps/.nfs0000000001722f4000000002
>> [2015-12-07 07:55:38.260120] D [MSGID: 0] [store.c:439:gf_store_handle_retrieve] 0-: Returning -1
>> [2015-12-07 07:55:38.260152] E [MSGID: 106200] [glusterd-store.c:3332:glusterd_store_update_snap] 0-management: snap handle is NULL
>> [2015-12-07 07:55:38.260180] E [MSGID: 106196] [glusterd-store.c:3427:glusterd_store_retrieve_snap] 0-management: Failed to update snapshot for .nfs0000000001722f40
>> [2015-12-07 07:55:38.260208] E [MSGID: 106043] [glusterd-store.c:3589:glusterd_store_retrieve_snaps] 0-management: Unable to restore snapshot: .nfs0000000001722f400
>> [2015-12-07 07:55:38.260241] D [MSGID: 0] [glusterd-store.c:3607:glusterd_store_retrieve_snaps] 0-management: Returning with -1
>> [2015-12-07 07:55:38.260268] D [MSGID: 0] [glusterd-store.c:4339:glusterd_restore] 0-management: Returning -1
>> [2015-12-07 07:55:38.260325] E [MSGID: 101019] [xlator.c:428:xlator_init] 0-management: Initialization of volume 'management' failed, review your volfile again
>> [2015-12-07 07:55:38.260355] E [graph.c:322:glusterfs_graph_init] 0-management: initializing translator failed
>> [2015-12-07 07:55:38.260374] E [graph.c:661:glusterfs_graph_activate] 0-graph: init failed
>> 
>> 8. rm /var/lib/glusterd/snaps/.nfsxxxxxxxxx (A board)
>> 9. run the command "systemctl start glusterd" again, and it succeeds.
>> 10. at this point the peer status is Peer in Cluster (Connected) and all
>> processes are online (see the sketch below).
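>> 
>> (For reference, the commands I use to check this, a sketch:)
>> 
>>     # confirm the peer is connected and the volume's processes are online
>>     gluster peer status
>>     gluster volume status gv0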
>> 
>> If a node resets abnormally, must I remove
>> /var/lib/glusterd/snaps/.nfsxxxxxx before starting glusterd?
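>> (The workaround I currently use, a sketch that assumes the stale entry
>> is always a zero-length .nfs* file under the snaps directory:)
>> 
>>     # remove only empty .nfs* leftovers, then start glusterd again
>>     find /var/lib/glusterd/snaps -name '.nfs*' -size 0 -delete
>>     systemctl start glusterd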
>> 
>> I want to know whether this is normal.
>> 
>> Thanks,
>> Xin