[Gluster-users] Thin arbiter daemon on non-thin setup?
wkmail
wkmail at bneit.com
Wed Jun 19 00:25:38 UTC 2019
This is a brand new Ubuntu 18.04, Gluster 6.2, replica 3 arbiter 1 (normal
arbiter, not thin) setup.
glusterfs-server/bionic,now 6.2-ubuntu1~bionic1 amd64 [installed]
clustered file-system (server package)
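For reference, the volume was created as a regular arbiter volume, roughly
like this (from memory; brick paths as in the volume info below):

gluster volume create gv0 replica 3 arbiter 1 \
    onetest2.gluster:/GLUSTER/gv0 \
    onetest3.gluster:/GLUSTER/gv0 \
    onetest1.gluster:/GLUSTER/gv0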
Systemd reports a degraded state, and I see this in the systemctl listing:
● gluster-ta-volume.service loaded failed failed GlusterFS,
Thin-arbiter process to maintain quorum for replica volume
systemctl status shows this:
● gluster-ta-volume.service - GlusterFS, Thin-arbiter process to
maintain quorum for replica volume
Loaded: loaded (/lib/systemd/system/gluster-ta-volume.service;
enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Sun 2019-06-16 12:36:15
PDT; 2 days ago
Process: 13020 ExecStart=/usr/sbin/glusterfsd -N --volfile-id ta-vol
-f /var/lib/glusterd/thin-arbiter/thin-arbiter.vol --brick-port 24007
--xlator-option ta-vol-server.transport.socket.listen-port=24007
(code=exited, status=255)
Main PID: 13020 (code=exited, status=255)
Jun 16 12:36:15 onetest3.pixelgate.net systemd[1]:
gluster-ta-volume.service: Service hold-off time over, scheduling restart.
Jun 16 12:36:15 onetest3.pixelgate.net systemd[1]:
gluster-ta-volume.service: Scheduled restart job, restart counter is at 5.
Jun 16 12:36:15 onetest3.pixelgate.net systemd[1]: Stopped GlusterFS,
Thin-arbiter process to maintain quorum for replica volume.
Jun 16 12:36:15 onetest3.pixelgate.net systemd[1]:
gluster-ta-volume.service: Start request repeated too quickly.
Jun 16 12:36:15 onetest3.pixelgate.net systemd[1]:
gluster-ta-volume.service: Failed with result 'exit-code'.
Jun 16 12:36:15 onetest3.pixelgate.net systemd[1]: Failed to start
GlusterFS, Thin-arbiter process to maintain quorum for replica volume
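I haven't dug much deeper than that yet; I assume the next step would be
something along these lines to see why glusterfsd keeps exiting with
status 255:

journalctl -u gluster-ta-volume.service -b --no-pager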
Since I am not using a thin arbiter, I am a little confused.
The Gluster setup itself seems fine and works normally:
root@onetest2:/var/log/libvirt/qemu# gluster peer status
Number of Peers: 2
Hostname: onetest1.gluster
Uuid: 79dc67df-c606-42f8-bbee-f7e73c730eb8
State: Peer in Cluster (Connected)
Hostname: onetest3.gluster
Uuid: d4e3330b-eaac-4a54-ad2e-a0da1114ec09
State: Peer in Cluster (Connected)
root@onetest2:/var/log/libvirt/qemu# gluster volume info
Volume Name: gv0
Type: Replicate
Volume ID: 1a80b833-0850-4ddb-83fa-f36da2b7a8fc
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: onetest2.gluster:/GLUSTER/gv0
Brick2: onetest3.gluster:/GLUSTER/gv0
Brick3: onetest1.gluster:/GLUSTER/gv0 (arbiter)
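I can also send the output of the usual health checks if that would help,
e.g.:

gluster volume status gv0
gluster volume heal gv0 info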
Thoughts?
Can I just disable or remove that service?
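In other words, assuming the thin-arbiter daemon is only needed for actual
thin-arbiter volumes, would something like this on each node be safe?

systemctl disable --now gluster-ta-volume.service
# or mask it so nothing can start it again:
systemctl mask gluster-ta-volume.service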
Sincerely,
W Kern