[Bugs] [Bug 1553778] New: /var/log/glusterfs/bricks/export_vdb.log flooded with this error message "Not able to add to index [Too many links]"
bugzilla at redhat.com
Fri Mar 9 14:18:47 UTC 2018
https://bugzilla.redhat.com/show_bug.cgi?id=1553778
Bug ID: 1553778
Summary: /var/log/glusterfs/bricks/export_vdb.log flooded with
this error message "Not able to add to index [Too many
links]"
Product: GlusterFS
Version: 3.10
Component: glusterd
Severity: urgent
Assignee: bugs at gluster.org
Reporter: alexandrumarcu at gmail.com
CC: bugs at gluster.org
Description of problem:
I have just upgraded from 3.8.15 to 3.10.11 (after another bug was fixed - Bug
1544461). Everything was fine for a while; I think it started when I added a new
server (replica) to the pool. Checking the log files, I saw
/var/log/glusterfs/bricks/export_vdb.log flooded with the following error
message:
[2018-03-09 12:57:19.544372] E [MSGID: 138003] [index.c:610:index_link_to_base]
0-gluster_volume-index:
/export_vdb/.glusterfs/indices/xattrop/4f8ed955-6a22-4311-baf6-9e38088dbabc:
Not able to add to index [Too many links]
[2018-03-09 12:57:19.544810] E [MSGID: 138003] [index.c:610:index_link_to_base]
0-gluster_volume-index:
/export_vdb/.glusterfs/indices/xattrop/4a451bd4-5992-4896-94ee-94cf6c5b17d4:
Not able to add to index [Too many links]
[2018-03-09 12:57:19.545229] E [MSGID: 138003] [index.c:610:index_link_to_base]
0-gluster_volume-index:
/export_vdb/.glusterfs/indices/xattrop/4a451bd4-5992-4896-94ee-94cf6c5b17d4:
Not able to add to index [Too many links]
This error appears only on the existing servers/bricks; the newly created one
does not have these errors (the sync is still in progress). I am running
Ubuntu 14, a 5 x replicated cluster, with the bricks on ext4 (I have read
https://github.com/gluster/glusterfs/issues/132).
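For context, "Too many links" is the strerror() text for EMLINK, which link(2)
returns once the target inode has reached the filesystem's hard-link limit
(65000 per inode on ext4). The index xlator hard-links every pending xattrop
entry to a base file under .glusterfs/indices/xattrop/, so on an ext4 brick
that base file can hit the cap. A quick check on an affected brick server
(a sketch only; the xattrop-* base-file name pattern is an assumption, and
/export_vdb is the brick path from this report):

# How many entries are currently queued in the xattrop index directory
ls /export_vdb/.glusterfs/indices/xattrop | wc -l

# Hard-link count of the base file(s) that new index entries are linked to;
# on ext4 this cannot exceed 65000, after which link(2) fails with EMLINK
stat -c '%h %n' /export_vdb/.glusterfs/indices/xattrop/xattrop-*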
Version-Release number of selected component (if applicable):
Old: 3.8.15
New: 3.10.11
How reproducible:
Steps to Reproduce:
1. Start from an existing 4 x replicated Gluster cluster on 3.8.15.
2. Upgrade the servers and clients to 3.10.11 and raise the cluster op-version.
3. Add a new brick/server (2-gls-dus21-ci-efood-real-de) to the pool => 5 x
replicated Gluster cluster (see the command sketch after this list).
4. /var/log/glusterfs/bricks/export_vdb.log fills with errors: "Not able to add to
index [Too many links]".
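For reference, steps 2 and 3 roughly correspond to the commands below (a
sketch, not the exact commands that were run; the op-version value is left as
a placeholder and the brick path is taken from this report):

# Step 2: after upgrading all servers and clients, raise the cluster op-version
gluster volume get all cluster.max-op-version
gluster volume set all cluster.op-version <max-op-version>

# Step 3: add the fifth replica to the existing volume
gluster peer probe 2-gls-dus21-ci-efood-real-de
gluster volume add-brick gluster_volume replica 5 2-gls-dus21-ci-efood-real-de:/export_vdb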
Actual results:
/var/log/glusterfs/bricks/export_vdb.log is flooded with errors, but only on the
existing bricks:
[2018-03-09 14:05:18.183135] E [MSGID: 138003] [index.c:610:index_link_to_base]
0-gluster_volume-index:
/export_vdb/.glusterfs/indices/xattrop/7d4c9f23-75c3-4162-a315-df52ef878d60:
Not able to add to index [Too many links]
[2018-03-09 14:05:18.514930] E [MSGID: 138003] [index.c:610:index_link_to_base]
0-gluster_volume-index:
/export_vdb/.glusterfs/indices/xattrop/bb30cd15-d86f-478b-94da-91cffab732b7:
Not able to add to index [Too many links]
[2018-03-09 14:05:18.515105] E [MSGID: 138003] [index.c:610:index_link_to_base]
0-gluster_volume-index:
/export_vdb/.glusterfs/indices/xattrop/bb30cd15-d86f-478b-94da-91cffab732b7:
Not able to add to index [Too many links]
[2018-03-09 14:05:18.646843] E [MSGID: 138003] [index.c:610:index_link_to_base]
0-gluster_volume-index:
/export_vdb/.glusterfs/indices/xattrop/bea06bc5-cc7b-4a50-801a-9bef757e5364:
Not able to add to index [Too many links]
[2018-03-09 14:05:18.647213] E [MSGID: 138003] [index.c:610:index_link_to_base]
0-gluster_volume-index:
/export_vdb/.glusterfs/indices/xattrop/bea06bc5-cc7b-4a50-801a-9bef757e5364:
Not able to add to index [Too many links]
[2018-03-09 14:05:18.675816] E [MSGID: 138003] [index.c:610:index_link_to_base]
0-gluster_volume-index:
/export_vdb/.glusterfs/indices/xattrop/95860b45-8976-4a01-a818-3dc70d507c11:
Not able to add to index [Too many links]
[2018-03-09 14:05:18.676011] E [MSGID: 138003] [index.c:610:index_link_to_base]
0-gluster_volume-index:
/export_vdb/.glusterfs/indices/xattrop/95860b45-8976-4a01-a818-3dc70d507c11:
Not able to add to index [Too many links]
Expected results:
The newly added brick syncs, with no errors flooding
/var/log/glusterfs/bricks/export_vdb.log.
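The sync progress of the new brick can be checked with the heal commands below
(a sketch; both should be available on 3.10); the pending-heal counts should
drain to zero once the new replica has caught up:

gluster volume heal gluster_volume info
gluster volume heal gluster_volume statistics heal-count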
Additional info:
Status of volume: gluster_volume
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick 2-gls-dus10-ci-efood-real-de.openstac
k.local:/export_vdb 49153 0 Y 24166
Brick 1-gls-dus10-ci-efood-real-de.openstac
k.local:/export_vdb 49153 0 Y 3364
Brick 1-gls-dus21-ci-efood-real-de:/export_
vdb 49153 0 Y 30337
Brick 3-gls-dus10-ci-efood-real-de.openstac
k.local:/export_vdb 49153 0 Y 3223
Brick 2-gls-dus21-ci-efood-real-de.openstac
klocal:/export_vdb 49152 0 Y 12426
Self-heal Daemon on localhost N/A N/A Y 21907
Self-heal Daemon on 1-gls-dus21-ci-efood-re
al-de.openstacklocal N/A N/A Y 16837
Self-heal Daemon on 2-gls-dus21-ci-efood-re
al-de.openstacklocal N/A N/A Y 17551
Self-heal Daemon on 1-gls-dus10-ci-efood-re
al-de.openstack.local N/A N/A Y 23096
Self-heal Daemon on 2-gls-dus10-ci-efood-re
al-de.openstack.local N/A N/A Y 10407
Task Status of Volume gluster_volume
------------------------------------------------------------------------------
There are no active volume tasks
root@3-gls-dus10-ci-efood-real-de:/var/log/glusterfs/bricks# gluster volume info
Volume Name: gluster_volume
Type: Replicate
Volume ID: 2e6bd6ba-37c8-4808-9156-08545cea3e3e
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 5 = 5
Transport-type: tcp
Bricks:
Brick1: 2-gls-dus10-ci-efood-real-de.openstack.local:/export_vdb
Brick2: 1-gls-dus10-ci-efood-real-de.openstack.local:/export_vdb
Brick3: 1-gls-dus21-ci-efood-real-de:/export_vdb
Brick4: 3-gls-dus10-ci-efood-real-de.openstack.local:/export_vdb
Brick5: 2-gls-dus21-ci-efood-real-de.openstacklocal:/export_vdb
Options Reconfigured:
performance.io-thread-count: 32
cluster.self-heal-window-size: 64
performance.cache-max-file-size: 1MB
performance.cache-size: 2GB
nfs.disable: on
auth.allow:
10.96.214.95,10.97.177.128,10.96.214.103,10.96.214.101,10.97.177.122,10.97.177.127,10.96.215.197,10.96.215.201,10.97.177.132,10.97.177.124,10.96.214.93,10.97.177.139,10.96.214.119,10.97.177.106,10.96.210.69,10.96.214.94,10.97.177.118,10.97.177.145,10.96.214.98
performance.readdir-ahead: on
features.barrier: off
transport.address-family: inet
I have attached the glusterfs log directory of an old existing server and of the
new server. PS: I have restarted the services etc.; the old server has the
upgrade as well. The new server is 2-gls-dus21-ci-efood-real-de.