[Gluster-users] Glusterd not working with systemd in redhat 7

Atin Mukherjee amukherj at redhat.com
Thu Oct 5 10:08:21 UTC 2017


So I have the root cause. Basically, as part of the patch we write the
brickinfo->uuid into the brickinfo file only when there is a change in the
volume. As per the brickinfo files you shared, the uuid was not saved since
there was no new change in the volume, and hence the uuid was always NULL
during brick resolution, which is why glusterd fell back to local address
resolution. Gating this behind a new op-version would have been a better
choice here.

As a workaround, you can toggle some volume option on all the volumes and
then retry, or, if your cluster.op-version is not up to date, bump the
op-version to the latest, which will take care of writing the uuid into the
brickinfo file.
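
For example, a minimal sketch of both workarounds (the volume name and the
option chosen here are only illustrative; any option you are comfortable
toggling will do, and the grep check assumes the persisted key shows up as a
"uuid=" line in the brickinfo files):

  # toggle an option and set it back, per volume, to force a store rewrite
  gluster volume set advdemo server.event-threads 3
  gluster volume set advdemo server.event-threads 2

  # or bump the cluster op-version to the latest supported value (31004 on 3.10.4+)
  gluster volume set all cluster.op-version 31004

  # verify the uuid is now persisted for each brick
  grep -H uuid /var/lib/glusterd/vols/*/bricks/*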

On Thu, Oct 5, 2017 at 1:52 PM, ismael mondiu <mondiu at hotmail.com> wrote:

> Hello Atin,
>
> Please find below the requested information:
>
>
> [root at dvihcasc0r ~]# cat /var/lib/glusterd/vols/advdemo/bricks/*
> hostname=dvihcasc0r
> path=/opt/glusterfs/advdemo
> real_path=/opt/glusterfs/advdemo
> listen-port=49152
> rdma.listen-port=0
> decommissioned=0
> brick-id=advdemo-client-0
> mount_dir=/advdemo
> snap-status=0
> hostname=dvihcasc0s
> path=/opt/glusterfs/advdemo
> real_path=/opt/glusterfs/advdemo
> listen-port=0
> rdma.listen-port=0
> decommissioned=0
> brick-id=advdemo-client-1
> mount_dir=/advdemo
> snap-status=0
> hostname=dvihcasc0t
> path=/opt/glusterfs/advdemo
> real_path=/opt/glusterfs/advdemo
> listen-port=0
> rdma.listen-port=0
> decommissioned=0
> brick-id=advdemo-client-2
> mount_dir=/advdemo
> snap-status=0
>
>
> ******************************************************************************************
>
>
> [root at dvihcasc0r ~]# gluster volume info
>
> Volume Name: advdemo
> Type: Replicate
> Volume ID: 953f610c-105e-4931-af4c-0105480c4573
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x (2 + 1) = 3
> Transport-type: tcp
> Bricks:
> Brick1: dvihcasc0r:/opt/glusterfs/advdemo
> Brick2: dvihcasc0s:/opt/glusterfs/advdemo
> Brick3: dvihcasc0t:/opt/glusterfs/advdemo (arbiter)
> Options Reconfigured:
> transport.address-family: inet
> nfs.disable: on
> server.event-threads: 2
>
>
> ************************************************************************************
>
> [root at dvihcasc0r ~]# glusterd -L DEBUG
> [root at dvihcasc0r ~]# date
> Thu Oct  5 10:05:44 CEST 2017
> [root at dvihcasc0r ~]# shutdown -r now
>
>
> *****************************************************************************************
>
>
> Please find the glusterd.log file attached to this mail.
>
>
> Thanks
>
>
>
>
>
> ------------------------------
> *From:* Atin Mukherjee <amukherj at redhat.com>
> *Sent:* Thursday, October 5, 2017 06:00
>
> *To:* ismael mondiu
> *Cc:* Niels de Vos; gluster-users at gluster.org; Gaurav Yadav
> *Subject:* Re: [Gluster-users] Glusterd not working with systemd in redhat 7
>
>
>
> On Wed, Oct 4, 2017 at 9:26 PM, ismael mondiu <mondiu at hotmail.com> wrote:
>
>> Hello,
>>
>> It seems the problem still persists on 3.10.6. I have a 1 x (2 + 1) = 3
>> configuration. I upgraded the first server and then launched a reboot.
>>
>>
>> Gluster is not starting. It seems that gluster starts before the network
>> layer is up.
>>
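>> (As a side note on the ordering itself -- this is generic systemd hygiene,
>> not the glusterd fix discussed further down in this thread, and the drop-in
>> path below is only an illustration -- glusterd can be ordered after
>> network-online.target:)
>>
>>   systemctl list-dependencies --after glusterd   # what glusterd waits for
>>   mkdir -p /etc/systemd/system/glusterd.service.d
>>   printf '[Unit]\nWants=network-online.target\nAfter=network-online.target\n' > /etc/systemd/system/glusterd.service.d/wait-online.conf
>>   systemctl daemon-reload
>>   # note: network-online.target only helps if a wait-online service
>>   # (e.g. NetworkManager-wait-online.service) is enabled
>>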
>> Some logs here:
>>
>>
>> Thanks
>>
>>
>> [2017-10-04 15:33:00.506396] I [MSGID: 106143]
>> [glusterd-pmap.c:277:pmap_registry_bind] 0-pmap: adding brick
>> /opt/glusterfs/advdemo on port 49152
>> [2017-10-04 15:33:01.206401] I [MSGID: 106488]
>> [glusterd-handler.c:1538:__glusterd_handle_cli_get_volume] 0-management:
>> Received get vol req
>> [2017-10-04 15:33:01.206936] I [MSGID: 106488]
>> [glusterd-handler.c:1538:__glusterd_handle_cli_get_volume] 0-management:
>> Received get vol req
>> [2017-10-04 15:33:18.043104] W [glusterfsd.c:1360:cleanup_and_exit]
>> (-->/lib64/libpthread.so.0(+0x7dc5) [0x7fc47d9e4dc5]
>> -->/usr/sbin/glusterd(glusterfs_sigwaiter+0xe5) [0x7fc47f07f135]
>> -->/usr/sbin/glusterd(cleanup_and_exit+0x6b) [0x7fc47f07ef5b] ) 0-:
>> received signum (15), shutting down
>> [2017-10-04 15:44:19.422240] I [MSGID: 100030] [glusterfsd.c:2503:main]
>> 0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version 3.10.6
>> (args: /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO)
>> [2017-10-04 15:44:19.662481] I [MSGID: 106478] [glusterd.c:1449:init]
>> 0-management: Maximum allowed open file descriptors set to 65536
>> [2017-10-04 15:44:19.662538] I [MSGID: 106479] [glusterd.c:1496:init]
>> 0-management: Using /var/lib/glusterd as working directory
>> [2017-10-04 15:44:19.676736] E [rpc-transport.c:283:rpc_transport_load]
>> 0-rpc-transport: /usr/lib64/glusterfs/3.10.6/rpc-transport/rdma.so:
>> cannot open shared object file: No such file or directory
>> [2017-10-04 15:44:19.676763] W [rpc-transport.c:287:rpc_transport_load]
>> 0-rpc-transport: volume 'rdma.management': transport-type 'rdma' is not
>> valid or not found on this machine
>> [2017-10-04 15:44:19.676792] W [rpcsvc.c:1661:rpcsvc_create_listener]
>> 0-rpc-service: cannot create listener, initing the transport failed
>> [2017-10-04 15:44:19.676803] E [MSGID: 106243] [glusterd.c:1720:init]
>> 0-management: creation of 1 listeners failed, continuing with succeeded
>> transport
>> [2017-10-04 15:44:19.688996] I [MSGID: 106228]
>> [glusterd.c:500:glusterd_check_gsync_present] 0-glusterd:
>> geo-replication module not installed in the system [No such file or
>> directory]
>> [2017-10-04 15:44:19.692198] I [MSGID: 106513]
>> [glusterd-store.c:2201:glusterd_restore_op_version] 0-glusterd:
>> retrieved op-version: 31004
>> [2017-10-04 15:44:20.157648] I [MSGID: 106498]
>> [glusterd-handler.c:3669:glusterd_friend_add_from_peerinfo]
>> 0-management: connect returned 0
>> The message "I [MSGID: 106498] [glusterd-handler.c:3669:glusterd_friend_add_from_peerinfo]
>> 0-management: connect returned 0" repeated 4 times between [2017-10-04
>> 15:44:20.157648] and [2017-10-04 15:44:20.168269]
>> [2017-10-04 15:44:20.168321] W [MSGID: 106062]
>> [glusterd-handler.c:3466:glusterd_transport_inet_options_build]
>> 0-glusterd: Failed to get tcp-user-timeout
>> [2017-10-04 15:44:20.168362] I [rpc-clnt.c:1059:rpc_clnt_connection_init]
>> 0-management: setting frame-timeout to 600
>> [2017-10-04 15:44:20.176335] E [socket.c:3230:socket_connect]
>> 0-management: connection attempt on  failed, (Network is unreachable)
>> [2017-10-04 15:44:20.176389] I [rpc-clnt.c:1059:rpc_clnt_connection_init]
>> 0-management: setting frame-timeout to 600
>> [2017-10-04 15:44:20.179957] E [socket.c:3230:socket_connect]
>> 0-management: connection attempt on  failed, (Network is unreachable)
>> [2017-10-04 15:44:20.179995] I [rpc-clnt.c:1059:rpc_clnt_connection_init]
>> 0-management: setting frame-timeout to 600
>> [2017-10-04 15:44:20.182592] E [socket.c:3230:socket_connect]
>> 0-management: connection attempt on  failed, (Network is unreachable)
>>
>> [2017-10-04 15:44:20.182633] I [rpc-clnt.c:1059:rpc_clnt_connection_init]
>> 0-management: setting frame-timeout to 600
>> [2017-10-04 15:44:20.185507] E [socket.c:3230:socket_connect]
>> 0-management: connection attempt on  failed, (Network is unreachable)
>> [2017-10-04 15:44:20.185541] I [rpc-clnt.c:1059:rpc_clnt_connection_init]
>> 0-management: setting frame-timeout to 600
>> [2017-10-04 15:44:20.188022] E [socket.c:3230:socket_connect]
>> 0-management: connection attempt on  failed, (Network is unreachable)
>> The message "W [MSGID: 106062]
>> [glusterd-handler.c:3466:glusterd_transport_inet_options_build] 0-glusterd:
>> Failed to get tcp-user-timeout" repeated 4 times between [2017-10-04
>> 15:44:20.168321] and [2017-10-04 15:44:20.185536]
>> [2017-10-04 15:44:20.188517] I [MSGID: 106544]
>> [glusterd.c:158:glusterd_uuid_init] 0-management: retrieved UUID:
>> 29f246da-71b1-4c60-a8c6-e6291f3f8cce
>> [2017-10-04 15:44:20.189091] E [MSGID: 106187]
>> [glusterd-store.c:4566:glusterd_resolve_all_bricks] 0-glusterd: resolve
>> brick failed in restore
>>
>
> The fix was done in a way that, while resolving the bricks, we first match
> them through their peers' UUIDs. Only if we fail to match through the UUIDs
> and hostnames do we go for local address resolution. In this case it seems
> like the UUID matching failed. To confirm this, could you first change the
> default log level of glusterd to DEBUG by modifying
> Environment="LOG_LEVEL=INFO" to Environment="LOG_LEVEL=DEBUG" and then
> reboot the node? This will help us identify for which particular brick
> (and volume) this failed, and then I'd like to see the output of
> "cat /var/lib/glusterd/vols/<volname>/bricks/*".
>
> Please also provide the gluster volume info output.
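>
> (A minimal sketch of that log-level change, assuming the stock unit file
> shipped by the packages; a systemd drop-in avoids editing
> /usr/lib/systemd/system/glusterd.service in place, and the drop-in file name
> is only illustrative:)
>
>   mkdir -p /etc/systemd/system/glusterd.service.d
>   printf '[Service]\nEnvironment="LOG_LEVEL=DEBUG"\n' > /etc/systemd/system/glusterd.service.d/log-level.conf
>   systemctl daemon-reload
>   reboot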
>
>> [2017-10-04 15:44:20.189123] E [MSGID: 101019] [xlator.c:503:xlator_init]
>> 0-management: Initialization of volume 'management' failed, review your
>> volfile again
>> [2017-10-04 15:44:20.189139] E [MSGID: 101066]
>> [graph.c:325:glusterfs_graph_init] 0-management: initializing translator
>> failed
>> [2017-10-04 15:44:20.189153] E [MSGID: 101176]
>> [graph.c:681:glusterfs_graph_activate] 0-graph: init failed
>> [2017-10-04 15:44:20.190877] W [glusterfsd.c:1360:cleanup_and_exit]
>> (-->/usr/sbin/glusterd(glusterfs_volumes_init+0xfd) [0x7f2b04693bcd]
>> -->/usr/sbin/glusterd(glusterfs_process_volfp+0x1b1) [0x7f2b04693a71]
>> -->/usr/sbin/glusterd(cleanup_and_exit+0x6b) [0x7f2b04692f5b] ) 0-:
>> received signum (1), shutting down
>>
>>
>>
>>
>>
>> ------------------------------
>> *From:* Niels de Vos <ndevos at redhat.com>
>> *Sent:* Wednesday, October 4, 2017 14:47
>> *To:* ismael mondiu
>> *Cc:* Atin Mukherjee; gluster-users at gluster.org; Gaurav Yadav
>> *Subject:* Re: [Gluster-users] Glusterd not working with systemd in redhat 7
>>
>> On Wed, Oct 04, 2017 at 12:17:23PM +0000, ismael mondiu wrote:
>> >
>> > Thanks Niels,
>> >
>> > We want to install it on redhat 7. We work in a secured environment
>> > with no internet access.
>> >
>> > We download the packages here
>> > https://buildlogs.centos.org/centos/7/storage/x86_64/gluster-3.10/ and
>> > then we push the packages to the server and install them via the rpm
>> > command.
>> >
>> > Do you think this is a correct way to upgrade gluster when working
>> > without internet access?
>>
>> Yes, you can do it that way as well. There should be no new dependencies
>> compared to previous 3.10 versions. Upgrading all the glusterfs packages
>> on an existing system should not give you any (rpm) problems.
>>
>> Niels
>>
>>
>> >
>> > Thanks in advance
>> >
>> >
>> >
>> >
>> >
>> > ________________________________
>> > From: Niels de Vos <ndevos at redhat.com>
>> > Sent: Wednesday, October 4, 2017 12:17
>> > To: ismael mondiu
>> > Cc: Atin Mukherjee; gluster-users at gluster.org; Gaurav Yadav
>> > Subject: Re: [Gluster-users] Glusterd not working with systemd in redhat 7
>> >
>> > On Wed, Oct 04, 2017 at 09:44:44AM +0000, ismael mondiu wrote:
>> > > Hello,
>> > >
>> > > I'd like to test whether version 3.10.6 fixes the problem. I'm wondering
>> > > which is the correct way to upgrade from 3.10.5 to 3.10.6.
>> > >
>> > > It's hard to find upgrade guides for a minor release. Can you help me
>> > > please?
>> >
>> > Packages for GlusterFS 3.10.6 are available in the testing repository of
>> > the CentOS Storage SIG. In order to test these packages on a CentOS 7
>> > system, follow these steps:
>> >
>> >   # yum install centos-release-gluster310
>> >   # yum --enablerepo=centos-gluster310-test install glusterfs-server-3.10.6-1.el7
>> >
>> > Make sure to restart any running Gluster binaries before running your
>> > tests.
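>> >
>> > For example (illustrative only -- whether brick and client processes also
>> > need a restart depends on your setup and upgrade procedure):
>> >
>> >   systemctl stop glusterd
>> >   pkill glusterfs ; pkill glusterfsd   # client and brick processes, if any
>> >   systemctl start glusterd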
>> >
>> > When someone reports back about the 3.10.6 release, and it is not worse
>> > than previous versions, I'll mark the packages stable so that they get
>> > synced to the CentOS mirrors in the days afterwards.
>> >
>> > Thanks,
>> > Niels
>> >
>> >
>> >
>> > >
>> > >
>> > > Thanks in advance
>> > >
>> > >
>> > > Ismael
>> > >
>> > >
>> > > ________________________________
>> > > From: Atin Mukherjee <amukherj at redhat.com>
>> > > Sent: Sunday, September 17, 2017 14:56
>> > > To: ismael mondiu
>> > > Cc: Niels de Vos; gluster-users at gluster.org; Gaurav Yadav
>> > > Subject: Re: [Gluster-users] Glusterd not working with systemd in redhat 7
>> > >
>> > > The backport just got merged a few minutes back and this fix should be
>> > > available in the next update of 3.10.
>> > >
>> > > On Fri, Sep 15, 2017 at 2:08 PM, ismael mondiu <mondiu at hotmail.com> wrote:
>> > >
>> > > Hello Team,
>> > >
>> > > Do you know when the backport to 3.10 will be available ?
>> > >
>> > > Thanks
>> > >
>> > >
>> > >
>> > > ________________________________
>> > > From: Atin Mukherjee <amukherj at redhat.com>
>> > > Sent: Friday, August 18, 2017 10:53
>> > > To: Niels de Vos
>> > > Cc: ismael mondiu; gluster-users at gluster.org; Gaurav Yadav
>> > > Subject: Re: [Gluster-users] Glusterd not working with systemd in redhat 7
>> > >
>> > >
>> > >
>> > > On Fri, Aug 18, 2017 at 2:01 PM, Niels de Vos <ndevos at redhat.com> wrote:
>> > > On Fri, Aug 18, 2017 at 12:22:33PM +0530, Atin Mukherjee wrote:
>> > > > You're hitting a race here. By the time glusterd tries to resolve the
>> > > > address of one of the remote bricks of a particular volume, the n/w
>> > > > interface is not yet up. We have fixed this issue in mainline and the
>> > > > 3.12 branch through the following commit:
>> > >
>> > > We still maintain 3.10 for at least 6 months. It probably makes sense to
>> > > backport this? I would not bother with 3.8 though, the last update for
>> > > this version has already been shipped.
>> > >
>> > > Agreed. Gaurav is backporting the fix in 3.10 now.
>> > >
>> > >
>> > > Thanks,
>> > > Niels
>> > >
>> > >
>> > > >
>> > > > commit 1477fa442a733d7b1a5ea74884cac8f29fbe7e6a
>> > > > Author: Gaurav Yadav <gyadav at redhat.com>
>> > > > Date:   Tue Jul 18 16:23:18 2017 +0530
>> > > >
>> > > >     glusterd : glusterd fails to start when peer's network interface
>> > > >     is down
>> > > >
>> > > >     Problem:
>> > > >     glusterd fails to start on nodes where glusterd tries to come up even
>> > > >     before network is up.
>> > > >
>> > > >     Fix:
>> > > >     On startup glusterd tries to resolve brick path which is based on
>> > > >     hostname/ip, but in the above scenario when network interface is not
>> > > >     up, glusterd is not able to resolve the brick path using ip_address or
>> > > >     hostname. With this fix glusterd will use UUID to resolve brick path.
>> > > >
>> > > >     Change-Id: Icfa7b2652417135530479d0aa4e2a82b0476f710
>> > > >     BUG: 1472267
>> > > >     Signed-off-by: Gaurav Yadav <gyadav at redhat.com>
>> > > >     Reviewed-on: https://review.gluster.org/17813
>> > > >     Smoke: Gluster Build System <jenkins at build.gluster.org>
>> > > >     Reviewed-by: Prashanth Pai <ppai at redhat.com>
>> > > >     CentOS-regression: Gluster Build System <jenkins at build.gluster.org>
>> > > >     Reviewed-by: Atin Mukherjee <amukherj at redhat.com>
>> > > >
>> > > >
>> > > >
>> > > > Note : 3.12 release is planned by end of this month.
>> > > >
>> > > > ~Atin
>> > > >
>> > > > On Thu, Aug 17, 2017 at 2:45 PM, ismael mondiu <mondiu at hotmail.com> wrote:
>> > > >
>> > > > > Hi Team,
>> > > > >
>> > > > > I noticed that glusterd never starts when I reboot my Redhat 7.1
>> > > > > server.
>> > > > >
>> > > > > The service is enabled but doesn't work.
>> > > > >
>> > > > > I tested with gluster 3.10.4 & gluster 3.10.5 and the problem
>> > > > > still exists.
>> > > > >
>> > > > >
>> > > > > When I start the service manually, it works.
>> > > > >
>> > > > > I've also tested on a Redhat 6.6 server with gluster 3.10.4 and it
>> > > > > works fine.
>> > > > >
>> > > > > The problem seems to be related to Redhat 7.1.
>> > > > >
>> > > > >
>> > > > > Is this a known issue? If yes, can you tell me what the workaround
>> > > > > is?
>> > > > >
>> > > > >
>> > > > > Thanks
>> > > > >
>> > > > >
>> > > > > Some logs here
>> > > > >
>> > > > >
>> > > > > [root@~]# systemctl status  glusterd
>> > > > > ● glusterd.service - GlusterFS, a clustered file-system server
>> > > > >    Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled;
>> > > > > vendor preset: disabled)
>> > > > >    Active: failed (Result: exit-code) since Thu 2017-08-17 11:04:00
>> > > > > CEST; 2min 9s ago
>> > > > >   Process: 851 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid
>> > > > > --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=1/FAILURE)
>> > > > >
>> > > > > Aug 17 11:03:59 dvihcasc0r systemd[1]: Starting GlusterFS, a clustered
>> > > > > file-system server...
>> > > > > Aug 17 11:04:00 dvihcasc0r systemd[1]: glusterd.service: control process
>> > > > > exited, code=exited status=1
>> > > > > Aug 17 11:04:00 dvihcasc0r systemd[1]: Failed to start GlusterFS, a
>> > > > > clustered file-system server.
>> > > > > Aug 17 11:04:00 dvihcasc0r systemd[1]: Unit glusterd.service entered
>> > > > > failed state.
>> > > > > Aug 17 11:04:00 dvihcasc0r systemd[1]: glusterd.service failed.
>> > > > >
>> > > > >
>> > > > > ************************************************************
>> > > > > ****************************
>> > > > >
>> > > > >  /var/log/glusterfs/glusterd.log
>> > > > >
>> > > > > ************************************************************
>> > > > > ********************************
>> > > > >
>> > > > >
>> > > > > [2017-08-17 09:04:00.202529] I [MSGID: 106478] [glusterd.c:1449:init]
>> > > > > 0-management: Maximum allowed open file descriptors set to 65536
>> > > > > [2017-08-17 09:04:00.202573] I [MSGID: 106479] [glusterd.c:1496:init]
>> > > > > 0-management: Using /var/lib/glusterd as working directory
>> > > > > [2017-08-17 09:04:00.365134] E [rpc-transport.c:283:rpc_transport_load]
>> > > > > 0-rpc-transport: /usr/lib64/glusterfs/3.10.5/rpc-transport/rdma.so:
>> > > > > cannot open shared object file: No such file or directory
>> > > > > [2017-08-17 09:04:00.365161] W [rpc-transport.c:287:rpc_transport_load]
>> > > > > 0-rpc-transport: volume 'rdma.management': transport-type 'rdma' is not
>> > > > > valid or not found on this machine
>> > > > > [2017-08-17 09:04:00.365195] W [rpcsvc.c:1661:rpcsvc_create_listener]
>> > > > > 0-rpc-service: cannot create listener, initing the transport failed
>> > > > > [2017-08-17 09:04:00.365206] E [MSGID: 106243] [glusterd.c:1720:init]
>> > > > > 0-management: creation of 1 listeners failed, continuing with succeeded
>> > > > > transport
>> > > > > [2017-08-17 09:04:00.464314] I [MSGID: 106228]
>> > > > > [glusterd.c:500:glusterd_check_gsync_present] 0-glusterd:
>> > > > > geo-replication module not installed in the system [No such file or
>> > > > > directory]
>> > > > > [2017-08-17 09:04:00.510412] I [MSGID: 106513]
>> > > > > [glusterd-store.c:2197:glusterd_restore_op_version] 0-glusterd:
>> > > > > retrieved op-version: 31004
>> > > > > [2017-08-17 09:04:00.711413] I [MSGID: 106194]
>> > > > > [glusterd-store.c:3776:glusterd_store_retrieve_missed_snaps_list]
>> > > > > 0-management: No missed snaps list.
>> > > > > [2017-08-17 09:04:00.756731] E [MSGID: 106187]
>> > > > > [glusterd-store.c:4559:glusterd_resolve_all_bricks] 0-glusterd:
>> > > > > resolve brick failed in restore
>> > > > > [2017-08-17 09:04:00.756787] E [MSGID: 101019] [xlator.c:503:xlator_init]
>> > > > > 0-management: Initialization of volume 'management' failed, review your
>> > > > > volfile again
>> > > > > [2017-08-17 09:04:00.756802] E [MSGID: 101066]
>> > > > > [graph.c:325:glusterfs_graph_init] 0-management: initializing translator
>> > > > > failed
>> > > > > [2017-08-17 09:04:00.756816] E [MSGID: 101176]
>> > > > > [graph.c:681:glusterfs_graph_activate] 0-graph: init failed
>> > > > > [2017-08-17 09:04:00.766584] W [glusterfsd.c:1332:cleanup_and_exit]
>> > > > > (-->/usr/sbin/glusterd(glusterfs_volumes_init+0xfd) [0x7f9bdef4cabd]
>> > > > > -->/usr/sbin/glusterd(glusterfs_process_volfp+0x1b1) [0x7f9bdef4c961]
>> > > > > -->/usr/sbin/glusterd(cleanup_and_exit+0x6b) [0x7f9bdef4be4b] ) 0-:
>> > > > > received signum (1), shutting down
>> > > > >
>> > > > > ************************************************************
>> > > > > ******************************
>> > > > >
>> > > > > [root@~]# uptime
>> > > > >  11:13:55 up 10 min,  1 user,  load average: 0.00, 0.02, 0.04
>> > > > >
>> > > > >
>> > > > > ************************************************************
>> > > > > ******************************
>> > > > >
>> > > > >
>> > > > >
>> > > > > _______________________________________________
>> > > > > Gluster-users mailing list
>> > > > > Gluster-users at gluster.org
>> > > > > http://lists.gluster.org/mailman/listinfo/gluster-users
>> > > > >
>> > >
>> > > > _______________________________________________
>> > > > Gluster-users mailing list
>> > > > Gluster-users at gluster.org
>> > > > http://lists.gluster.org/mailman/listinfo/gluster-users
>> > >
>> > >
>> > >
>>
>
>