[Gluster-devel] Messup with peer status!!
ABHISHEK PALIWAL
abhishpaliwal at gmail.com
Wed Mar 16 07:35:33 UTC 2016
Hi Atin,
I have the board present in the faulty state; can we set up a live session to
debug it?
Please provide the steps to set up a hangout session.
Regards,
Abhishek
On Wed, Mar 16, 2016 at 11:23 AM, Atin Mukherjee <amukherj at redhat.com>
wrote:
> [1970-01-01 00:02:05.860202] D [MSGID: 0]
> [store.c:501:gf_store_iter_new] 0-: Returning with 0
> [1970-01-01 00:02:05.860518] D [MSGID: 0]
> [store.c:613:gf_store_iter_get_next] 0-: Returning with 0
> [1970-01-01 00:02:05.860545] D [MSGID: 0]
> [glusterd-store.c:2567:glusterd_store_update_volinfo] 0-management: key
> = type value = 2
> [1970-01-01 00:02:05.860583] D [MSGID: 0]
> [store.c:613:gf_store_iter_get_next] 0-: Returning with 0
> [1970-01-01 00:02:05.860609] D [MSGID: 0]
> [glusterd-store.c:2567:glusterd_store_update_volinfo] 0-management: key
> = count value = 2
> [1970-01-01 00:02:05.860650] D [MSGID: 0]
> [store.c:613:gf_store_iter_get_next] 0-: Returning with 0
> [1970-01-01 00:02:05.860676] D [MSGID: 0]
> [glusterd-store.c:2567:glusterd_store_update_volinfo] 0-management: key
> = status value = 1
> [1970-01-01 00:02:05.860717] D [MSGID: 0]
> [store.c:613:gf_store_iter_get_next] 0-: Returning with 0
> [1970-01-01 00:02:05.860743] D [MSGID: 0]
> [glusterd-store.c:2567:glusterd_store_update_volinfo] 0-management: key
> = sub_count value = 2
> [1970-01-01 00:02:05.860780] D [MSGID: 0]
> [store.c:613:gf_store_iter_get_next] 0-: Returning with 0
> [1970-01-01 00:02:05.860806] D [MSGID: 0]
> [glusterd-store.c:2567:glusterd_store_update_volinfo] 0-management: key
> = stripe_count value = 1
> [1970-01-01 00:02:05.860842] D [MSGID: 0]
> [store.c:613:gf_store_iter_get_next] 0-: Returning with 0
> [1970-01-01 00:02:05.860868] D [MSGID: 0]
> [glusterd-store.c:2567:glusterd_store_update_volinfo] 0-management: key
> = replica_count value = 2
> [1970-01-01 00:02:05.860905] D [MSGID: 0]
> [store.c:613:gf_store_iter_get_next] 0-: Returning with 0
> [1970-01-01 00:02:05.860931] D [MSGID: 0]
> [glusterd-store.c:2567:glusterd_store_update_volinfo] 0-management: key
> = disperse_count value = 0
> [1970-01-01 00:02:05.860967] D [MSGID: 0]
> [store.c:613:gf_store_iter_get_next] 0-: Returning with 0
> [1970-01-01 00:02:05.860994] D [MSGID: 0]
> [glusterd-store.c:2567:glusterd_store_update_volinfo] 0-management: key
> = redundancy_count value = 0
> [1970-01-01 00:02:05.861030] D [MSGID: 0]
> [store.c:613:gf_store_iter_get_next] 0-: Returning with 0
> [1970-01-01 00:02:05.861056] D [MSGID: 0]
> [glusterd-store.c:2567:glusterd_store_update_volinfo] 0-management: key
> = version value = 42
> [1970-01-01 00:02:05.861093] D [MSGID: 0]
> [store.c:613:gf_store_iter_get_next] 0-: Returning with 0
> [1970-01-01 00:02:05.861118] D [MSGID: 0]
> [glusterd-store.c:2567:glusterd_store_update_volinfo] 0-management: key
> = transport-type value = 0
> [1970-01-01 00:02:05.861155] D [MSGID: 0]
> [store.c:613:gf_store_iter_get_next] 0-: Returning with 0
> [1970-01-01 00:02:05.861182] D [MSGID: 0]
> [glusterd-store.c:2567:glusterd_store_update_volinfo] 0-management: key
> = volume-id value = d86e215c-1710-4b33-8076-fbf8e075d3e7
> [1970-01-01 00:02:05.861290] D [MSGID: 0]
> [store.c:613:gf_store_iter_get_next] 0-: Returning with 0
> [1970-01-01 00:02:05.861317] D [MSGID: 0]
> [glusterd-store.c:2567:glusterd_store_update_volinfo] 0-management: key
> = username value = db1d21cb-3feb-41da-88d0-2fc7a34cdb3a
> [1970-01-01 00:02:05.861361] D [MSGID: 0]
> [store.c:613:gf_store_iter_get_next] 0-: Returning with 0
> [1970-01-01 00:02:05.861387] D [MSGID: 0]
> [glusterd-store.c:2567:glusterd_store_update_volinfo] 0-management: key
> = password value = df5bf0b7-34dd-4f0d-a01b-62d2b67aa8b0
> [1970-01-01 00:02:05.861426] D [MSGID: 0]
> [store.c:613:gf_store_iter_get_next] 0-: Returning with 0
> [1970-01-01 00:02:05.861455] D [MSGID: 0]
> [glusterd-store.c:2567:glusterd_store_update_volinfo] 0-management: key
> = op-version value = 3
> [1970-01-01 00:02:05.861503] D [MSGID: 0]
> [store.c:613:gf_store_iter_get_next] 0-: Returning with 0
> [1970-01-01 00:02:05.861530] D [MSGID: 0]
> [glusterd-store.c:2567:glusterd_store_update_volinfo] 0-management: key
> = client-op-version value = 3
> [1970-01-01 00:02:05.861568] D [MSGID: 0]
> [store.c:613:gf_store_iter_get_next] 0-: Returning with 0
> [1970-01-01 00:02:05.861594] D [MSGID: 0]
> [glusterd-store.c:2567:glusterd_store_update_volinfo] 0-management: key
> = quota-version value = 0
> [1970-01-01 00:02:05.861632] D [MSGID: 0]
> [store.c:613:gf_store_iter_get_next] 0-: Returning with 0
> [1970-01-01 00:02:05.861658] D [MSGID: 0]
> [glusterd-store.c:2567:glusterd_store_update_volinfo] 0-management: key
> = parent_volname value = N/A
> [1970-01-01 00:02:05.861696] D [MSGID: 0]
> [store.c:613:gf_store_iter_get_next] 0-: Returning with 0
> [1970-01-01 00:02:05.861722] D [MSGID: 0]
> [glusterd-store.c:2567:glusterd_store_update_volinfo] 0-management: key
> = restored_from_snap value = 00000000-0000-0000-0000-000000000000
> [1970-01-01 00:02:05.861762] D [MSGID: 0]
> [store.c:613:gf_store_iter_get_next] 0-: Returning with 0
> [1970-01-01 00:02:05.861788] D [MSGID: 0]
> [glusterd-store.c:2567:glusterd_store_update_volinfo] 0-management: key
> = snap-max-hard-limit value = 256
> [1970-01-01 00:02:05.861825] D [MSGID: 0]
> [store.c:613:gf_store_iter_get_next] 0-: Returning with 0
> [1970-01-01 00:02:05.861851] D [MSGID: 0]
> [glusterd-store.c:2567:glusterd_store_update_volinfo] 0-management: key
> = nfs.disable value = on
> [1970-01-01 00:02:05.861940] D [MSGID: 0]
> [glusterd-store.c:2725:glusterd_store_update_volinfo] 0-management:
> Parsed as Volume-set:key=nfs.disable,value:on
> [1970-01-01 00:02:05.861978] D [MSGID: 0]
> [store.c:613:gf_store_iter_get_next] 0-: Returning with 0
> [1970-01-01 00:02:05.862004] D [MSGID: 0]
> [glusterd-store.c:2567:glusterd_store_update_volinfo] 0-management: key
> = network.ping-timeout value = 4
> [1970-01-01 00:02:05.862039] D [MSGID: 0]
> [glusterd-store.c:2725:glusterd_store_update_volinfo] 0-management:
> Parsed as Volume-set:key=network.ping-timeout,value:4
> [1970-01-01 00:02:05.862077] D [MSGID: 0]
> [store.c:613:gf_store_iter_get_next] 0-: Returning with 0
> [1970-01-01 00:02:05.862104] D [MSGID: 0]
> [glusterd-store.c:2567:glusterd_store_update_volinfo] 0-management: key
> = performance.readdir-ahead value = on
> [1970-01-01 00:02:05.862140] D [MSGID: 0]
> [glusterd-store.c:2725:glusterd_store_update_volinfo] 0-management:
> Parsed as Volume-set:key=performance.readdir-ahead,value:on
> [1970-01-01 00:02:05.862178] D [MSGID: 0]
> [store.c:613:gf_store_iter_get_next] 0-: Returning with 0
> [1970-01-01 00:02:05.862217] D [MSGID: 0]
> [glusterd-store.c:2567:glusterd_store_update_volinfo] 0-management: key
> = brick-0 value = 10.32.0.48:-opt-lvmdir-c2-brick
> [1970-01-01 00:02:05.862257] D [MSGID: 0]
> [store.c:613:gf_store_iter_get_next] 0-: Returning with 0
> [1970-01-01 00:02:05.862283] D [MSGID: 0]
> [glusterd-store.c:2567:glusterd_store_update_volinfo] 0-management: key
> = brick-1 value = 10.32.1.144:-opt-lvmdir-c2-brick
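>
> The "key = ... value = ..." pairs above are glusterd reading the volume back
> from its on-disk store; assuming the default working directory, the same
> values should be visible directly in the stored volume file, e.g.:
>
> # cat /var/lib/glusterd/vols/c_glusterfs/info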
>
> On 03/16/2016 11:04 AM, ABHISHEK PALIWAL wrote:
> > Hi Atin,
> >
> > Please tell me the line numbers where you are seeing that glusterd has
> > restored values from the disk files in the Board B log.
> >
> > Regards,
> > Abhishek
> >
> > On Tue, Mar 15, 2016 at 11:31 AM, ABHISHEK PALIWAL
> > <abhishpaliwal at gmail.com> wrote:
> >
> >
> >
> > On Tue, Mar 15, 2016 at 11:10 AM, Atin Mukherjee
> > <amukherj at redhat.com> wrote:
> >
> >
> >
> > On 03/15/2016 10:54 AM, ABHISHEK PALIWAL wrote:
> > > Hi Atin,
> > >
> > > Are these files OK, or do you need some other files?
> > I just started going through the log files you shared. I have a few
> > questions for you after looking at the log:
> > 1. Are you sure the log you have provided from board B is from after a
> > reboot? If you claim that a reboot wipes off /var/lib/glusterd/, then
> > why am I seeing that glusterd has restored values from the disk files?
> >
> >
> > Yes, these logs are from Board B after the reboot. Could you please point
> > me to the line numbers where you see that glusterd has restored values
> > from the disk files?
> >
> >
> > 2. From the content of the glusterd configurations which you shared
> > earlier, the peer UUIDs are 4bf982c0-b21b-415c-b870-e72f36c7f2e7 and
> > 4bf982c0-b21b-415c-b870-e72f36c7f2e7 from 002500/glusterd/peers, and
> > c6b64e36-76da-4e98-a616-48e0e52c7006 from 000300/glusterd/peers. They
> > don't even appear in glusterd.log.
> >
> > Somehow I have a feeling that the log and configuration files you
> > shared don't match!
> >
> >
> > There are two UUID files present in 002500/glusterd/peers:
> >
> > 1. 4bf982c0-b21b-415c-b870-e72f36c7f2e7
> > Content of this file is:
> > uuid=4bf982c0-b21b-415c-b870-e72f36c7f2e7
> > state=10
> > hostname1=10.32.0.48
> > I have a question: where is this UUID coming from?
> >
> > 2. 98a28041-f853-48ac-bee0-34c592eeb827
> > Content of this file is:
> > uuid=f4ebe3c5-b6a4-4795-98e0-732337f76faf  // this UUID belongs to the
> > 000300 (10.32.0.48) board; you can check this in both glusterd log files
> > state=4  // what does this state field indicate in this file?
> > hostname1=10.32.0.48
> >
> > There is only one UUID file present in 000300/glusterd/peers:
> >
> > c6b64e36-76da-4e98-a616-48e0e52c7006  // this is the old UUID of the
> > 002500 board before the reboot
> >
> > Content of this file is:
> >
> > uuid=267a92c3-fd28-4811-903c-c1d54854bda9  // this is the new UUID
> > generated by the 002500 board after the reboot; you can check this as
> > well in the glusterd log of the 000300 board
> > state=3
> > hostname1=10.32.1.144
> >
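> > A quick way to cross-check these entries on both boards (assuming the
> > default /var/lib/glusterd working directory) is:
> >
> > # cat /var/lib/glusterd/glusterd.info      <-- this board's own UUID
> > # grep . /var/lib/glusterd/peers/*         <-- UUIDs of the peers it knows
> >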
> >
> > ~Atin
> >
> > >
> > > Regards,
> > > Abhishek
> > >
> > > On Mon, Mar 14, 2016 at 6:12 PM, ABHISHEK PALIWAL
> > > <abhishpaliwal at gmail.com> wrote:
> > >
> > > You mean the etc-*-glusterd*.log file from both of the boards?
> > >
> > > If yes, please find the attachment for the same.
> > >
> > > On Mon, Mar 14, 2016 at 5:27 PM, Atin Mukherjee <amukherj at redhat.com> wrote:
> > >
> > >
> > >
> > > On 03/14/2016 05:09 PM, ABHISHEK PALIWAL wrote:
> > > > I am not getting which glusterd directory you are asking about. If you
> > > > are asking about the /var/lib/glusterd directory, then it is the same
> > > > one I shared earlier.
> > > 1. Go to the /var/log/glusterfs directory.
> > > 2. Look for the glusterd log file.
> > > 3. Attach the log (see the example path below).
> > > Do it for both the boards.
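> > >
> > > On a stock install the glusterd log is usually named after the volfile,
> > > e.g. something like:
> > >
> > > # ls /var/log/glusterfs/
> > > # cat /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
> > >
> > > (The exact path and name are an assumption for a default setup; they may
> > > differ on these boards.)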
> > > >
> > > > I have two directories related to gluster
> > > >
> > > > 1. /var/log/glusterfs
> > > > 2./var/lib/glusterd
> > > >
> > > > On Mon, Mar 14, 2016 at 4:12 PM, Atin Mukherjee <amukherj at redhat.com> wrote:
> > > >
> > > >
> > > >
> > > > On 03/14/2016 03:59 PM, ABHISHEK PALIWAL wrote:
> > > > > I have only these glusterd files available on the nodes
> > > > Look for etc-*-glusterd*.log in /var/log/glusterfs, that represents the
> > > > glusterd log file.
> > > > >
> > > > > Regards,
> > > > > Abhishek
> > > > >
> > > > > On Mon, Mar 14, 2016 at 3:43 PM, Atin Mukherjee <amukherj at redhat.com> wrote:
> > > > >
> > > > >
> > > > >
> > > > > On 03/14/2016 02:18 PM, ABHISHEK PALIWAL wrote:
> > > > > >
> > > > > >
> > > > > > On Mon, Mar 14, 2016 at 12:12 PM, Atin Mukherjee <amukherj at redhat.com> wrote:
> > > > > >
> > > > > >
> > > > > >
> > > > > > On 03/14/2016 10:52 AM, ABHISHEK PALIWAL wrote:
> > > > > > > Hi Team,
> > > > > > >
> > > > > > > I am facing an issue with peer status, and because of that,
> > > > > > > remove-brick on a replica volume is failing.
> > > > > > >
> > > > > > > Here is the scenario of what I am doing with gluster:
> > > > > > >
> > > > > > > 1. I have two boards, A & B, and gluster is running on both of
> > > > > > > the boards.
> > > > > > > 2. On these boards I have created a replicated volume with one
> > > > > > > brick on each board.
> > > > > > > 3. Created one glusterfs mount point where both of the bricks are
> > > > > > > mounted.
> > > > > > > 4. Started the volume with nfs.disable=true.
> > > > > > > 5. Till now everything is in sync between both of the bricks (the
> > > > > > > rough setup commands are sketched below).
> > > > > > >
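> > > > > > > For reference, the setup above corresponds roughly to the
> > > > > > > following commands (a sketch; the exact options used on the
> > > > > > > boards may differ):
> > > > > > >
> > > > > > > # gluster peer probe 10.32.1.144
> > > > > > > # gluster volume create c_glusterfs replica 2 \
> > > > > > >       10.32.0.48:/opt/lvmdir/c2/brick 10.32.1.144:/opt/lvmdir/c2/brick
> > > > > > > # gluster volume start c_glusterfs
> > > > > > > # gluster volume set c_glusterfs nfs.disable on
> > > > > > >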
> > > > > > > Now I manually plug board B out of the slot and plug it in again.
> > > > > > >
> > > > > > > 1. After board B boots up, I start glusterd on board B.
> > > > > > >
> > > > > > > Following is some gluster command output on Board B after step 1.
> > > > > > >
> > > > > > > # gluster peer status
> > > > > > > Number of Peers: 2
> > > > > > >
> > > > > > > Hostname: 10.32.0.48
> > > > > > > Uuid: f4ebe3c5-b6a4-4795-98e0-732337f76faf
> > > > > > > State: Accepted peer request (Connected)
> > > > > > >
> > > > > > > Hostname: 10.32.0.48
> > > > > > > Uuid: 4bf982c0-b21b-415c-b870-e72f36c7f2e7
> > > > > > > State: Peer is connected and Accepted (Connected)
> > > > > > >
> > > > > > > Why is this peer status showing two peers with different UUIDs?
> > > > > > GlusterD doesn't generate a new UUID on init if it has already
> > > > > > generated a UUID earlier. This clearly indicates that on reboot of
> > > > > > board B the contents of /var/lib/glusterd were wiped off. I've asked
> > > > > > you this question multiple times: is that the case?
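> > > > > >
> > > > > > The local UUID is persisted in glusterd.info under the glusterd
> > > > > > working directory, so (assuming the default path) a quick check is:
> > > > > >
> > > > > > # cat /var/lib/glusterd/glusterd.info
> > > > > >
> > > > > > The UUID= line there should stay the same across restarts unless
> > > > > > the file itself was wiped.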
> > > > > >
> > > > > >
> > > > > > Yes, I am following the same procedure mentioned in the link:
> > > > > >
> > > > > > http://www.gluster.org/community/documentation/index.php/Resolving_Peer_Rejected
> > > > > >
> > > > > > but why is it showing two peer entries?
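> > > > > >
> > > > > > For reference, the recovery described on that page is roughly the
> > > > > > following, run on the rejected peer (a sketch from memory; please
> > > > > > verify against the page and adapt the service commands to how
> > > > > > glusterd is managed on the board):
> > > > > >
> > > > > > # <stop glusterd>
> > > > > > # find /var/lib/glusterd/ -mindepth 1 ! -name glusterd.info -delete
> > > > > > # <start glusterd>
> > > > > > # gluster peer probe 10.32.0.48
> > > > > > # <restart glusterd>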
> > > > > >
> > > > > > >
> > > > > > > # gluster volume info
> > > > > > >
> > > > > > > Volume Name: c_glusterfs
> > > > > > > Type: Replicate
> > > > > > > Volume ID: c11f1f13-64a0-4aca-98b5-91d609a4a18d
> > > > > > > Status: Started
> > > > > > > Number of Bricks: 1 x 2 = 2
> > > > > > > Transport-type: tcp
> > > > > > > Bricks:
> > > > > > > Brick1: 10.32.0.48:/opt/lvmdir/c2/brick
> > > > > > > Brick2: 10.32.1.144:/opt/lvmdir/c2/brick
> > > > > > > Options Reconfigured:
> > > > > > > performance.readdir-ahead: on
> > > > > > > network.ping-timeout: 4
> > > > > > > nfs.disable: on
> > > > > > > # gluster volume heal c_glusterfs info
> > > > > > > c_glusterfs: Not able to fetch volfile from glusterd
> > > > > > > Volume heal failed.
> > > > > > > # gluster volume status c_glusterfs
> > > > > > > Status of volume: c_glusterfs
> > > > > > > Gluster process                         TCP Port  RDMA Port  Online  Pid
> > > > > > > ------------------------------------------------------------------------------
> > > > > > > Brick 10.32.1.144:/opt/lvmdir/c2/brick  N/A       N/A        N       N/A
> > > > > > > Self-heal Daemon on localhost           N/A       N/A        Y       3922
> > > > > > >
> > > > > > > Task Status of Volume c_glusterfs
> > > > > > > ------------------------------------------------------------------------------
> > > > > > > There are no active volume tasks
> > > > > > > --
> > > > > > >
> > > > > > > At the same time, Board A has the following gluster command
> > > > > > > output:
> > > > > > >
> > > > > > > # gluster peer status
> > > > > > > Number of Peers: 1
> > > > > > >
> > > > > > > Hostname: 10.32.1.144
> > > > > > > Uuid: c6b64e36-76da-4e98-a616-48e0e52c7006
> > > > > > > State: Peer in Cluster (Connected)
> > > > > > >
> > > > > > > Why is it showing the older UUID of host 10.32.1.144 when this
> > > > > > > UUID has changed and the new UUID is
> > > > > > > 267a92c3-fd28-4811-903c-c1d54854bda9?
> > > > > > >
> > > > > > >
> > > > > > > # gluster volume heal c_glusterfs info
> > > > > > > c_glusterfs: Not able to fetch volfile from glusterd
> > > > > > > Volume heal failed.
> > > > > > > # gluster volume status c_glusterfs
> > > > > > > Status of volume: c_glusterfs
> > > > > > > Gluster process                         TCP Port  RDMA Port  Online  Pid
> > > > > > > ------------------------------------------------------------------------------
> > > > > > > Brick 10.32.0.48:/opt/lvmdir/c2/brick   49169     0          Y       2427
> > > > > > > Brick 10.32.1.144:/opt/lvmdir/c2/brick  N/A       N/A        N       N/A
> > > > > > > Self-heal Daemon on localhost           N/A       N/A        Y       3388
> > > > > > > Self-heal Daemon on 10.32.1.144         N/A       N/A        Y       3922
> > > > > > >
> > > > > > > Task Status of Volume c_glusterfs
> > > > > > > ------------------------------------------------------------------------------
> > > > > > > There are no active volume tasks
> > > > > > >
> > > > > > > As you can see, "gluster volume status" shows that brick
> > > > > > > "10.32.1.144:/opt/lvmdir/c2/brick" is offline, so we have tried to
> > > > > > > remove it but are getting the error "volume remove-brick
> > > > > > > c_glusterfs replica 1 10.32.1.144:/opt/lvmdir/c2/brick force :
> > > > > > > FAILED : Incorrect brick 10.32.1.144:/opt/lvmdir/c2/brick for
> > > > > > > volume c_glusterfs" on Board A.
> > > > > > >
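> > > > > > > The exact command being run on Board A, reconstructed from the
> > > > > > > error text above, is:
> > > > > > >
> > > > > > > # gluster volume remove-brick c_glusterfs replica 1 \
> > > > > > >       10.32.1.144:/opt/lvmdir/c2/brick force
> > > > > > >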
> > > > > > > Please reply to this post, because I am always getting this error
> > > > > > > in this scenario.
> > > > > > >
> > > > > > > For more detail, I am also attaching the logs of both boards,
> > > > > > > which include some manually created files in which you can find
> > > > > > > the output of gluster commands from both boards.
> > > > > > >
> > > > > > > In the logs, 00030 is board A and 00250 is board B.
> > > > > > This attachment doesn't help much. Could you attach full glusterd
> > > > > > log files from both the nodes?
> > > > > > >
> > > > > >
> > > > > > Inside this attachment you will find the full glusterd log files:
> > > > > > 00300/glusterd/ and 002500/glusterd/
> > > > > No, that contains the configuration files.
> > > > > >
> > > > > > > Thanks in advance; waiting for the reply.
> > > > > > >
> > > > > > > Regards,
> > > > > > > Abhishek
> > > > > > >
> > > > > > >
> > > > > > > Regards
> > > > > > > Abhishek Paliwal
> > > > > > >
> > > > > > >
> > > > > > >
> > _______________________________________________
> > > > > > > Gluster-devel mailing list
> > > > > > > Gluster-devel at gluster.org
> > > > > > > http://www.gluster.org/mailman/listinfo/gluster-devel
> > > > > > >
>
--
Regards
Abhishek Paliwal