[Bugs] [Bug 1633669] Gluster bricks fails frequently
bugzilla at redhat.com
Fri Oct 5 08:42:35 UTC 2018
https://bugzilla.redhat.com/show_bug.cgi?id=1633669
--- Comment #8 from Jaime Dulzura <jaime.dulzura at cevalogistics.com> ---
Created attachment 1490778
--> https://bugzilla.redhat.com/attachment.cgi?id=1490778&action=edit
Logs from the latest brick process failure.
Status of failing brick:
[root@iahdvlgfsc001 cevaroot]# gluster v status CL_Shared
Status of volume: CL_Shared
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick iahdvlgfsa001:/local/bricks/volume02/
CL_Shared                                   49152     0          Y       4890
Brick iahdvlgfsb001:/local/bricks/volume02/
CL_Shared                                   49152     0          Y       1021
Brick iahdvlgfsc001:/local/bricks/volume02/
CL_Shared                                   N/A       N/A        N       N/A
Self-heal Daemon on localhost               N/A       N/A        Y       20211
Self-heal Daemon on iahdvlgfsa001.logistics
.corp                                       N/A       N/A        Y       32017
Self-heal Daemon on iahdvlgfsb001           N/A       N/A        Y       1068
Task Status of Volume CL_Shared
------------------------------------------------------------------------------
There are no active volume tasks
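Note: the CL_Shared brick on iahdvlgfsc001 is the one offline (Online = N)
while the other two bricks stay up. As a sketch, assuming the underlying
/local/bricks/volume02 filesystem is still mounted and healthy, a dead brick
process is normally respawned with a forced volume start, which only starts
bricks that are not already running:

[root@iahdvlgfsc001 cevaroot]# gluster volume start CL_Shared force
[root@iahdvlgfsc001 cevaroot]# gluster volume status CL_Shared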
glusterd status:
[root@iahdvlgfsc001 cevaroot]# systemctl status glusterd -l
● glusterd.service - GlusterFS, a clustered file-system server
Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor
preset: disabled)
Active: active (running) since Fri 2018-10-05 00:43:34 CDT; 2h 53min ago
Process: 1324 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid
--log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
Main PID: 1332 (glusterd)
CGroup: /system.slice/glusterd.service
├─ 1332 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
├─ 6873 /usr/sbin/glusterfsd -s iahdvlgfsc001 --volfile-id
tibco.iahdvlgfsc001.local-bricks-volume01-tibco -p
/var/run/gluster/vols/tibco/iahdvlgfsc001-local-bricks-volume01-tibco.pid -S
/var/run/gluster/843d10f6ac486e3e.socket --brick-name
/local/bricks/volume01/tibco -l
/var/log/glusterfs/bricks/local-bricks-volume01-tibco.log --xlator-option
*-posix.glusterd-uuid=6af863cd-43f6-448e-936d-889766c1a655 --process-name brick
--brick-port 49153 --xlator-option tibco-server.listen-port=49153
└─20211 /usr/sbin/glusterfs -s localhost --volfile-id
gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l
/var/log/glusterfs/glustershd.log -S /var/run/gluster/8abfe66e3fb78dec.socket
--xlator-option *replicate*.node-uuid=6af863cd-43f6-448e-936d-889766c1a655
--process-name glustershd
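glusterd itself is healthy; only the CL_Shared brick child process is missing
from the service cgroup (just the tibco brick and the self-heal daemon remain).
Assuming the CL_Shared brick follows the same -l log-path convention visible in
the tibco brick command line above, its brick log can be inspected with:

[root@iahdvlgfsc001 cevaroot]# tail -n 200 /var/log/glusterfs/bricks/local-bricks-volume02-CL_Shared.log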
Oct 05 03:15:11 iahdvlgfsc001.logistics.corp local-bricks-volume02-CL_Shared[20186]: dlfcn 1
Oct 05 03:15:11 iahdvlgfsc001.logistics.corp local-bricks-volume02-CL_Shared[20186]: libpthread 1
Oct 05 03:15:11 iahdvlgfsc001.logistics.corp local-bricks-volume02-CL_Shared[20186]: llistxattr 1
Oct 05 03:15:11 iahdvlgfsc001.logistics.corp local-bricks-volume02-CL_Shared[20186]: setfsid 1
Oct 05 03:15:11 iahdvlgfsc001.logistics.corp local-bricks-volume02-CL_Shared[20186]: spinlock 1
Oct 05 03:15:11 iahdvlgfsc001.logistics.corp local-bricks-volume02-CL_Shared[20186]: epoll.h 1
Oct 05 03:15:11 iahdvlgfsc001.logistics.corp local-bricks-volume02-CL_Shared[20186]: xattr.h 1
Oct 05 03:15:11 iahdvlgfsc001.logistics.corp local-bricks-volume02-CL_Shared[20186]: st_atim.tv_nsec 1
Oct 05 03:15:11 iahdvlgfsc001.logistics.corp local-bricks-volume02-CL_Shared[20186]: package-string: glusterfs 4.1.5
Oct 05 03:15:11 iahdvlgfsc001.logistics.corp local-bricks-volume02-CL_Shared[20186]: ---------
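These journal lines (dlfcn, libpthread, ..., package-string) are the tail of
the crash report a glusterfs brick process emits when it dies on a fatal
signal; the "signal received:" line and the backtrace frames appear just
before them. A sketch for pulling the surrounding report, using the syslog
identifier shown above and the inferred brick log path (both taken from the
output above, not confirmed on this system):

[root@iahdvlgfsc001 cevaroot]# journalctl -t local-bricks-volume02-CL_Shared --since "2018-10-05 03:14:00" --until "2018-10-05 03:16:00"
[root@iahdvlgfsc001 cevaroot]# grep -B5 -A40 "signal received" /var/log/glusterfs/bricks/local-bricks-volume02-CL_Shared.log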