[Gluster-users] gluster peer disconnected
Strahil Nikolov
hunter86_bg at yahoo.com
Thu Jul 30 20:14:32 UTC 2020
Is 'gluster pool list' consistent on all nodes?
Do you have all your bricks properly mounted on the affected node?
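
For example, something like this (a sketch; the brick paths are taken from the volume info quoted below):

-----------------------

# run on both mseas-data2 and mseas-data3; the output should list
# the same peers/UUIDs and show them all as 'Connected'
gluster pool list

# on mseas-data3, verify the brick filesystems are actually mounted
grep -E '/export/sda|/export/sdc' /proc/mounts

-----------------------
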
Best Regards,
Strahil Nikolov
On 30 July 2020 20:22:18 GMT+03:00, Pat Haley <phaley at mit.edu> wrote:
>
>Hi,
>
>We have a cluster whose common storage is a gluster volume consisting of
>4 bricks residing on 2 servers (more details at the bottom). Yesterday we
>experienced a power outage. To start the gluster volume after the power
>came back, I had to do the following (the commands are sketched after
>this list):
>
> * manually start a gluster daemon on one of the servers (mseas-data3)
> * start the gluster volume on the other server (mseas-data2)
>    o I had initially tried just starting the gluster volume without
>      manually starting the other daemon, but that was unsuccessful.
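>
>For reference, the commands were roughly the following (a sketch from
>memory; the sysvinit service name matches the status output further
>down):
>
>-----------------------
>
># on mseas-data3: bring up the management daemon
>service glusterd start
>
># on mseas-data2: start the volume
>gluster volume start data-volume
>
>-----------------------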
>
>After this, my recollection is that the peers were communicating with
>each other.
>
>Today I was looking around and noticed that the mseas-data3 server is in
>a disconnected state (even though the compute nodes of our cluster still
>see the full gluster volume):
>
>-----------------------
>
>[root at mseas-data2 ~]# gluster peer status
>Number of Peers: 1
>
>Hostname: mseas-data3
>Uuid: b39d4deb-c291-437e-8013-09050c1fa9e3
>State: Peer in Cluster (Disconnected)
>
>-----------------------
>
>Following the advice at
>https://lists.gluster.org/pipermail/gluster-users/2015-April/021597.html,
>I confirmed that the two servers can ping each other. The gluster daemon
>on mseas-data2 is active, but the daemon on mseas-data3 shows:
>
>--------------------------------
>
>[root at mseas-data3 ~]# service glusterd status
>glusterd dead but pid file exists
>
>--------------------------------
>
>Is it safe to just restart that daemon on mseas-data3, or is there some
>other procedure I should follow? I ask because we have a number of jobs
>running that appear to be writing successfully to the gluster volume,
>and I'd prefer that they continue if possible.
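>
>If restarting is the answer, I assume it would be something like the
>following (the pid-file path is my guess for this sysvinit setup):
>
>--------------------------------
>
># on mseas-data3: check whether any glusterd process is really gone
>ps -ef | grep -i glusterd
>
># if so, clear the stale pid file (path assumed) and restart
>rm -f /var/run/glusterd.pid
>service glusterd start
>service glusterd status
>
>--------------------------------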
>
>Any advice would be appreciated. Thanks
>
>---------------------------------------------------
>
>[root at mseas-data2 ~]# gluster volume info
>
>Volume Name: data-volume
>Type: Distribute
>Volume ID: c162161e-2a2d-4dac-b015-f31fd89ceb18
>Status: Started
>Number of Bricks: 4
>Transport-type: tcp
>Bricks:
>Brick1: mseas-data2:/mnt/brick1
>Brick2: mseas-data2:/mnt/brick2
>Brick3: mseas-data3:/export/sda/brick3
>Brick4: mseas-data3:/export/sdc/brick4
>Options Reconfigured:
>diagnostics.client-log-level: ERROR
>network.inode-lru-limit: 50000
>performance.md-cache-timeout: 60
>performance.open-behind: off
>disperse.eager-lock: off
>auth.allow: *
>server.allow-insecure: on
>nfs.exports-auth-enable: on
>diagnostics.brick-sys-log-level: WARNING
>performance.readdir-ahead: on
>nfs.disable: on
>nfs.export-volumes: off
>cluster.min-free-disk: 1%