[Gluster-users] cannot add server back to cluster after reinstallation

Atin Mukherjee atin.mukherjee83 at gmail.com
Wed Mar 27 10:37:42 UTC 2019


On Wed, 27 Mar 2019 at 16:02, Riccardo Murri <riccardo.murri at gmail.com>
wrote:

> Hello Atin,
>
> > Check cluster.op-version, peer status, volume status output. If they are
> all fine you’re good.
>
> Both `op-version` and `peer status` look fine:
> ```
> # gluster volume get all cluster.max-op-version
> Option                                  Value
> ------                                  -----
> cluster.max-op-version                  31202
>
> # gluster peer status
> Number of Peers: 4
>
> Hostname: glusterfs-server-004
> Uuid: 9a5763d2-1941-4e5d-8d33-8d6756f7f318
> State: Peer in Cluster (Connected)
>
> Hostname: glusterfs-server-005
> Uuid: d53398f6-19d4-4633-8bc3-e493dac41789
> State: Peer in Cluster (Connected)
>
> Hostname: glusterfs-server-003
> Uuid: 3c74d2b4-a4f3-42d4-9511-f6174b0a641d
> State: Peer in Cluster (Connected)
>
> Hostname: glusterfs-server-001
> Uuid: 60bcc47e-ccbe-493e-b4ea-d45d63123977
> State: Peer in Cluster (Connected)
> ```
>
> However, `volume status` shows no snapshot daemon entry for the reinstalled
> server (the 002 one).


I believe you ran this command on 002? In that case it is shown as
localhost in the output.


> We're not using snapshots so I guess this is fine too?


Is features.uss enabled for this volume? If it isn't, we don't show snapd
information in the status output.

Rafi - am I correct?
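For reference, a minimal sketch of checking that setting from the CLI. It assumes the volume is named `glusterfs`, as in the status output below; the fallback branch is only there so the snippet degrades gracefully on a host without the gluster CLI:

```shell
# Check whether uss (User Serviceable Snapshots) is enabled.
# Assumes the volume name "glusterfs" from the status output in this thread.
if command -v gluster >/dev/null 2>&1; then
    gluster volume get glusterfs features.uss
else
    # Fallback so the snippet still runs where gluster is not installed.
    echo "gluster CLI not available on this host"
fi
```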


>
> ```
> # gluster volume status
> Status of volume: glusterfs
> Gluster process                             TCP Port  RDMA Port  Online  Pid
> ------------------------------------------------------------------------------
> Brick glusterfs-server-005:/srv/glusterfs   49152     0          Y       1410
> Brick glusterfs-server-004:/srv/glusterfs   49152     0          Y       1416
> Brick glusterfs-server-003:/srv/glusterfs   49152     0          Y       1520
> Brick glusterfs-server-001:/srv/glusterfs   49152     0          Y       1266
> Brick glusterfs-server-002:/srv/glusterfs   49152     0          Y       3011
> Snapshot Daemon on localhost                N/A       N/A        Y       3029
> Snapshot Daemon on glusterfs-server-001     49153     0          Y       1361
> Snapshot Daemon on glusterfs-server-005     49153     0          Y       1478
> Snapshot Daemon on glusterfs-server-004     49153     0          Y       1490
> Snapshot Daemon on glusterfs-server-003     49153     0          Y       1563
>
> Task Status of Volume glusterfs
> ------------------------------------------------------------------------------
> Task                 : Rebalance
> ID                   : 0eaf6ad1-df95-48f4-b941-17488010ddcc
> Status               : failed
> ```
>
> Thanks,
> Riccardo
>
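As a quick sanity check on output like the above, the Online column can be scanned mechanically. This is just a sketch; a few abbreviated sample lines from the status listing stand in for a live query:

```shell
# Sketch: scan the Online column of `gluster volume status` output and flag
# any process that is not online. Sample lines abbreviated from the listing
# above stand in for live output.
status_output='Brick glusterfs-server-005:/srv/glusterfs 49152 0 Y 1410
Brick glusterfs-server-002:/srv/glusterfs 49152 0 Y 3011
Snapshot Daemon on localhost N/A N/A Y 3029'

echo "$status_output" | awk '
    # The Online flag is the next-to-last field on every process line.
    $(NF-1) != "Y" { print "offline:", $0; bad = 1 }
    END { if (!bad) print "all processes online" }'
```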
-- 
--Atin