[Gluster-users] glusterfs client crashes
Dj Merrill
gluster at deej.net
Sun Feb 21 19:23:09 UTC 2016
On 2/21/2016 1:27 PM, Gaurav Garg wrote:
> It seems that your brick processes are offline or all brick processes have crashed. Could you paste the output of the #gluster volume status and #gluster volume info commands and attach the core file?
Very interesting. The volume status was reporting both bricks offline, but
the brick processes on both servers were still running. Restarting
glusterfsd on one of the servers brought both bricks back online.
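For reference, on the server it was just a restart of the glusterfsd
service; roughly the following, with the exact service name depending on
the distribution's init scripts:

    # restart the brick processes on this server (service name may differ)
    service glusterfsd restart

    # or, per volume, ask glusterd to respawn any bricks that are offline
    gluster volume start gv0 force
    gluster volume start gv1 force

    # confirm the bricks show Online again
    gluster volume status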
I am going to have to take a closer look at the logs on the servers.
Even after bringing the bricks back up, the client is still reporting
"Transport endpoint is not connected". Is there anything other than a
reboot that will clear this state on the client?
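(Thinking out loud: if a reboot isn't required, my guess is that kicking
the FUSE mount on the client would do it, something along these lines,
where the mount point is just an example:

    # lazy-unmount the stale mount and remount the volume over FUSE
    umount -l /mnt/gv0
    mount -t glusterfs glusterfs1:/gv0 /mnt/gv0

but I'd like to confirm that is the recommended approach.)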
# gluster volume status
Status of volume: gv0
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick glusterfs1:/export/brick1/sdb1        49152     0          Y       15073
Brick glusterfs2:/export/brick1/sdb1        49152     0          Y       14068
Self-heal Daemon on localhost               N/A       N/A        Y       14063
Self-heal Daemon on glusterfs1              N/A       N/A        Y       7732
Task Status of Volume gv0
------------------------------------------------------------------------------
There are no active volume tasks
Status of volume: gv1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick glusterfs1:/export/brick2/sdb2        49154     0          Y       15089
Brick glusterfs2:/export/brick2/sdb2        49157     0          Y       14073
Self-heal Daemon on localhost               N/A       N/A        Y       14063
Self-heal Daemon on glusterfs1              N/A       N/A        Y       7732
Task Status of Volume gv1
------------------------------------------------------------------------------
There are no active volume tasks
# gluster volume info
Volume Name: gv0
Type: Replicate
Volume ID: 1d31ea3c-a240-49fe-a68d-4218ac051b6d
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: glusterfs1:/export/brick1/sdb1
Brick2: glusterfs2:/export/brick1/sdb1
Options Reconfigured:
performance.cache-max-file-size: 750MB
diagnostics.count-fop-hits: on
diagnostics.latency-measurement: on
features.quota-timeout: 30
features.quota: off
performance.io-thread-count: 16
performance.write-behind-window-size: 1GB
performance.cache-size: 1GB
nfs.volume-access: read-only
nfs.disable: on
cluster.self-heal-daemon: enable
Volume Name: gv1
Type: Replicate
Volume ID: 7127b90b-e208-4aea-a920-4db195295d7a
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: glusterfs1:/export/brick2/sdb2
Brick2: glusterfs2:/export/brick2/sdb2
Options Reconfigured:
performance.cache-size: 1GB
performance.write-behind-window-size: 1GB
nfs.disable: on
nfs.volume-access: read-only
performance.cache-max-file-size: 750MB
cluster.self-heal-daemon: enable
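(For what it's worth, the "Options Reconfigured" entries above were all
applied with the standard volume-set syntax, e.g.:

    gluster volume set gv0 performance.cache-size 1GB

in case that matters for debugging.)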
-Dj