[Gluster-users] Brick process always crashes - GlusterFS 3.7.2

王庆勇 hpqyzj at 163.com
Thu Aug 20 03:37:57 UTC 2015


Hi all,

When I use GlusterFS 3.7.2, the brick process keeps crashing. I find these errors in /var/log/messages:

Aug 20 09:32:26 localhost data10-gfs[4973]: pending frames:
Aug 20 09:32:26 localhost data10-gfs[4973]: patchset: git://git.gluster.com/glusterfs.git
Aug 20 09:32:26 localhost data10-gfs[4973]: signal received: 11
Aug 20 09:32:26 localhost data10-gfs[4973]: time of crash:
Aug 20 09:32:26 localhost data10-gfs[4973]: 2015-08-20 01:32:26
Aug 20 09:32:26 localhost data10-gfs[4973]: configuration details:
Aug 20 09:32:26 localhost data10-gfs[4973]: argp 1
Aug 20 09:32:26 localhost data10-gfs[4973]: backtrace 1
Aug 20 09:32:26 localhost data10-gfs[4973]: dlfcn 1
Aug 20 09:32:26 localhost data10-gfs[4973]: libpthread 1
Aug 20 09:32:26 localhost data10-gfs[4973]: llistxattr 1
Aug 20 09:32:26 localhost data10-gfs[4973]: setfsid 1
Aug 20 09:32:26 localhost data10-gfs[4973]: spinlock 1
Aug 20 09:32:26 localhost data10-gfs[4973]: epoll.h 1
Aug 20 09:32:26 localhost data10-gfs[4973]: xattr.h 1
Aug 20 09:32:26 localhost data10-gfs[4973]: st_atim.tv_nsec 1
Aug 20 09:32:26 localhost data10-gfs[4973]: package-string: glusterfs 3.7.2
Aug 20 09:32:26 localhost data10-gfs[4973]: ---------

Aug 20 09:32:29 localhost abrt[28344]: Saved core dump of pid 4973 (/usr/sbin/glusterfsd) to /var/spool/abrt/ccpp-2015-08-20-09:32:26-4973 (663728128 bytes)
Aug 20 09:32:29 localhost abrtd: Directory 'ccpp-2015-08-20-09:32:26-4973' creation detected
Aug 20 09:32:29 localhost abrtd: Package 'glusterfs-fuse' isn't signed with proper key
Aug 20 09:32:29 localhost abrtd: 'post-create' on '/var/spool/abrt/ccpp-2015-08-20-09:32:26-4973' exited with 1
Aug 20 09:32:29 localhost abrtd: Deleting problem directory '/var/spool/abrt/ccpp-2015-08-20-09:32:26-4973'
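A note on the abrtd lines above: the core dump was saved and then immediately deleted, because the glusterfs package is not signed with a key abrt trusts. A minimal sketch for keeping future cores, assuming the stock abrt configuration on RHEL/CentOS (the OpenGPGCheck option lives in /etc/abrt/abrt-action-save-package-data.conf):

#sed -i 's/^OpenGPGCheck.*/OpenGPGCheck = no/' /etc/abrt/abrt-action-save-package-data.conf
#service abrtd restart

With that change, the next crash should leave the problem directory under /var/spool/abrt in place.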

 

What causes this problem?
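Once a core is retained, a backtrace should point at the faulting code. A sketch, assuming the problem-directory path from the log above, that abrt names the core file "coredump", and that glusterfs-debuginfo is installed for readable symbols:

#gdb /usr/sbin/glusterfsd /var/spool/abrt/ccpp-2015-08-20-09:32:26-4973/coredump
(gdb) bt

The top frames of "bt" usually identify which translator took the SIGSEGV (signal 11).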

 

Below is my volume configuration:

#gluster vol info

Volume Name: dnionvol
Type: Distributed-Replicate
Volume ID: 05f8c9bd-f72e-4235-ba28-ed1b5a5ee615
Status: Started
Number of Bricks: 48 x 2 = 96
Transport-type: tcp
Bricks:
Brick1: node1.glusterzj.com:/data1/gfs
Brick2: node5.glusterzj.com:/data1/gfs
Brick3: node2.glusterzj.com:/data1/gfs
Brick4: node6.glusterzj.com:/data1/gfs
Brick5: node3.glusterzj.com:/data1/gfs
Brick6: node7.glusterzj.com:/data1/gfs
Brick7: node4.glusterzj.com:/data1/gfs
Brick8: node8.glusterzj.com:/data1/gfs
Brick9: node1.glusterzj.com:/data2/gfs
Brick10: node5.glusterzj.com:/data2/gfs
Brick11: node2.glusterzj.com:/data2/gfs
Brick12: node6.glusterzj.com:/data2/gfs
Brick13: node3.glusterzj.com:/data2/gfs
Brick14: node7.glusterzj.com:/data2/gfs
Brick15: node4.glusterzj.com:/data2/gfs
Brick16: node8.glusterzj.com:/data2/gfs
Brick17: node1.glusterzj.com:/data3/gfs
Brick18: node5.glusterzj.com:/data3/gfs
Brick19: node2.glusterzj.com:/data3/gfs
Brick20: node6.glusterzj.com:/data3/gfs
Brick21: node3.glusterzj.com:/data3/gfs
Brick22: node7.glusterzj.com:/data3/gfs
Brick23: node4.glusterzj.com:/data3/gfs
Brick24: node8.glusterzj.com:/data3/gfs
Brick25: node1.glusterzj.com:/data4/gfs
Brick26: node5.glusterzj.com:/data4/gfs
Brick27: node2.glusterzj.com:/data4/gfs
Brick28: node6.glusterzj.com:/data4/gfs
Brick29: node3.glusterzj.com:/data4/gfs
Brick30: node7.glusterzj.com:/data4/gfs
Brick31: node4.glusterzj.com:/data4/gfs
Brick32: node8.glusterzj.com:/data4/gfs
Brick33: node1.glusterzj.com:/data5/gfs
Brick34: node5.glusterzj.com:/data5/gfs
Brick35: node2.glusterzj.com:/data5/gfs
Brick36: node6.glusterzj.com:/data5/gfs
Brick37: node3.glusterzj.com:/data5/gfs
Brick38: node7.glusterzj.com:/data5/gfs
Brick39: node4.glusterzj.com:/data5/gfs
Brick40: node8.glusterzj.com:/data5/gfs
Brick41: node1.glusterzj.com:/data6/gfs
Brick42: node5.glusterzj.com:/data6/gfs
Brick43: node2.glusterzj.com:/data6/gfs
Brick44: node6.glusterzj.com:/data6/gfs
Brick45: node3.glusterzj.com:/data6/gfs
Brick46: node7.glusterzj.com:/data6/gfs
Brick47: node4.glusterzj.com:/data6/gfs
Brick48: node8.glusterzj.com:/data6/gfs
Brick49: node1.glusterzj.com:/data7/gfs
Brick50: node5.glusterzj.com:/data7/gfs
Brick51: node2.glusterzj.com:/data7/gfs
Brick52: node6.glusterzj.com:/data7/gfs
Brick53: node3.glusterzj.com:/data7/gfs
Brick54: node7.glusterzj.com:/data7/gfs
Brick55: node4.glusterzj.com:/data7/gfs
Brick56: node8.glusterzj.com:/data7/gfs
Brick57: node1.glusterzj.com:/data8/gfs
Brick58: node5.glusterzj.com:/data8/gfs
Brick59: node2.glusterzj.com:/data8/gfs
Brick60: node6.glusterzj.com:/data8/gfs
Brick61: node3.glusterzj.com:/data8/gfs
Brick62: node7.glusterzj.com:/data8/gfs
Brick63: node4.glusterzj.com:/data8/gfs
Brick64: node8.glusterzj.com:/data8/gfs
Brick65: node1.glusterzj.com:/data9/gfs
Brick66: node5.glusterzj.com:/data9/gfs
Brick67: node2.glusterzj.com:/data9/gfs
Brick68: node6.glusterzj.com:/data9/gfs
Brick69: node3.glusterzj.com:/data9/gfs
Brick70: node7.glusterzj.com:/data9/gfs
Brick71: node4.glusterzj.com:/data9/gfs
Brick72: node8.glusterzj.com:/data9/gfs
Brick73: node1.glusterzj.com:/data10/gfs
Brick74: node5.glusterzj.com:/data10/gfs
Brick75: node2.glusterzj.com:/data10/gfs
Brick76: node6.glusterzj.com:/data10/gfs
Brick77: node3.glusterzj.com:/data10/gfs
Brick78: node7.glusterzj.com:/data10/gfs
Brick79: node4.glusterzj.com:/data10/gfs
Brick80: node8.glusterzj.com:/data10/gfs
Brick81: node1.glusterzj.com:/data11/gfs
Brick82: node5.glusterzj.com:/data11/gfs
Brick83: node2.glusterzj.com:/data11/gfs
Brick84: node6.glusterzj.com:/data11/gfs
Brick85: node3.glusterzj.com:/data11/gfs
Brick86: node7.glusterzj.com:/data11/gfs
Brick87: node4.glusterzj.com:/data11/gfs
Brick88: node8.glusterzj.com:/data11/gfs
Brick89: node1.glusterzj.com:/data12/gfs
Brick90: node5.glusterzj.com:/data12/gfs
Brick91: node2.glusterzj.com:/data12/gfs
Brick92: node6.glusterzj.com:/data12/gfs
Brick93: node3.glusterzj.com:/data12/gfs
Brick94: node7.glusterzj.com:/data12/gfs
Brick95: node4.glusterzj.com:/data12/gfs
Brick96: node8.glusterzj.com:/data12/gfs
Options Reconfigured:
diagnostics.brick-log-level: ERROR
storage.build-pgfid: on
server.allow-insecure: on
changelog.changelog: on
geo-replication.ignore-pid-check: on
geo-replication.indexing: on
features.quota: on
nfs.disable: true
features.inode-quota: on
features.quota-deem-statfs: off
features.default-soft-limit: 80%
features.quota-timeout: 5
performance.io-thread-count: 4
performance.cache-size: 1GB
performance.write-behind-window-size: 2MB
performance.write-behind: on
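One more note on the options above: with diagnostics.brick-log-level at ERROR, the brick log carries almost no context before a crash. Raising it temporarily with the standard CLI can help correlate the crash with the preceding operations:

#gluster volume set dnionvol diagnostics.brick-log-level INFO

Afterwards, check the brick log on the affected node (here the data10 brick) under /var/log/glusterfs/bricks/ for messages just before the "signal received: 11" entry.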

 
