[Gluster-users] Gluster 6.8: some error messages during op-version-update

Hu Bert (revirii at googlemail.com)
Wed Apr 1 09:01:52 UTC 2020


Hi,

I just upgraded a test cluster from version 5.12 to 6.8; the upgrade
itself went fine, but if I remember correctly some error messages
appeared right after setting the new op-version:

3 servers: becquerel, dirac, tesla
2 volumes:
workdata, mounted on /shared/public
persistent, mounted on /shared/private
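
For reference, a minimal sketch of the usual sanity checks before
bumping the op-version (standard gluster CLI, run on each server;
output will vary):

gluster --version                           # should report 6.8 after the upgrade
gluster volume get all cluster.op-version   # current cluster-wide op-version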

server becquerel, volume persistent:

[2020-04-01 08:36:29.029953] I [glusterfsd-mgmt.c:58:mgmt_cbk_spec]
0-mgmt: Volume file changed
[2020-04-01 08:36:29.317342] I [glusterfsd-mgmt.c:58:mgmt_cbk_spec]
0-mgmt: Volume file changed
[2020-04-01 08:36:29.341508] I [glusterfsd-mgmt.c:58:mgmt_cbk_spec]
0-mgmt: Volume file changed
[2020-04-01 08:36:29.402862] E [MSGID: 101002]
[graph.y:134:new_volume] 0-parser: new volume
(persistent-write-behind) definition in line 308 unexpected
[2020-04-01 08:36:29.402924] E [MSGID: 101098]
[xlator.c:938:xlator_tree_free_members] 0-parser: Translator tree not
found
[2020-04-01 08:36:29.402945] E [MSGID: 101098]
[xlator.c:959:xlator_tree_free_memacct] 0-parser: Translator tree not
found
[2020-04-01 08:36:29.407428] E [MSGID: 101019]
[graph.y:352:graphyyerror] 0-parser: line 309: duplicate 'type'
defined for volume 'xlator_tree_free_memacct'
[2020-04-01 08:36:29.410943] E [MSGID: 101021]
[graph.y:363:graphyyerror] 0-parser: syntax error: line 309 (volume
'xlator_tree_free_memacct'): "performance/write-behind"
allowed tokens are 'volume', 'type', 'subvolumes', 'option', 'end-volume'()

server becquerel, volume workdata:

[2020-04-01 08:36:29.029953] I [glusterfsd-mgmt.c:58:mgmt_cbk_spec]
0-mgmt: Volume file changed
[2020-04-01 08:36:29.317385] I [glusterfsd-mgmt.c:58:mgmt_cbk_spec]
0-mgmt: Volume file changed
[2020-04-01 08:36:29.341511] I [glusterfsd-mgmt.c:58:mgmt_cbk_spec]
0-mgmt: Volume file changed
[2020-04-01 08:36:29.400282] E [MSGID: 101002]
[graph.y:134:new_volume] 0-parser: new volume (workdata-write-behind)
definition in line 308 unexpected
[2020-04-01 08:36:29.400338] E [MSGID: 101098]
[xlator.c:938:xlator_tree_free_members] 0-parser: Translator tree not
found
[2020-04-01 08:36:29.400354] E [MSGID: 101098]
[xlator.c:959:xlator_tree_free_memacct] 0-parser: Translator tree not
found
pending frames:
frame : type(0) op(0)
frame : type(0) op(0)
frame : type(0) op(0)
frame : type(0) op(0)
frame : type(0) op(0)
patchset: git://git.gluster.org/glusterfs.git
signal received: 11
time of crash:
2020-04-01 08:36:29
configuration details:
argp 1
backtrace 1
dlfcn 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 5.12
/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(+0x25c3f)[0x7facd212cc3f]
/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(gf_print_trace+0x323)[0x7facd2137163]
/lib/x86_64-linux-gnu/libc.so.6(+0x37840)[0x7facd1846840]
/lib/x86_64-linux-gnu/libc.so.6(+0x15c1a7)[0x7facd196b1a7]
/lib/x86_64-linux-gnu/libc.so.6(_IO_vfprintf+0x1fff)[0x7facd18609ef]
/lib/x86_64-linux-gnu/libc.so.6(__vasprintf_chk+0xc8)[0x7facd19190f8]
/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(_gf_msg+0x1b0)[0x7facd212dd40]
/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(+0xa1970)[0x7facd21a8970]
/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(+0xa1d86)[0x7facd21a8d86]
/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(glusterfs_graph_construct+0x344)[0x7facd21a9a24]
/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(glusterfs_volfile_reconfigure+0x30)[0x7facd2165cc0]
/usr/sbin/glusterfs(mgmt_getspec_cbk+0x2e1)[0x55a5e0a6de71]
/usr/lib/x86_64-linux-gnu/libgfrpc.so.0(+0xec60)[0x7facd20f7c60]
/usr/lib/x86_64-linux-gnu/libgfrpc.so.0(+0xefbf)[0x7facd20f7fbf]
/usr/lib/x86_64-linux-gnu/libgfrpc.so.0(rpc_transport_notify+0x23)[0x7facd20f44e3]
/usr/lib/x86_64-linux-gnu/glusterfs/5.12/rpc-transport/socket.so(+0xbdb0)[0x7faccde83db0]
/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(+0x83e7f)[0x7facd218ae7f]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x7fa3)[0x7facd1cc0fa3]
/lib/x86_64-linux-gnu/libc.so.6(clone+0x3f)[0x7facd19084cf]
---------

server tesla: nothing related
server dirac, log for the mount of volume persistent on /shared/private:

[2020-04-01 08:36:29.029845] I [glusterfsd-mgmt.c:58:mgmt_cbk_spec]
0-mgmt: Volume file changed
[2020-04-01 08:36:29.317253] I [glusterfsd-mgmt.c:58:mgmt_cbk_spec]
0-mgmt: Volume file changed
[2020-04-01 08:36:29.341371] I [glusterfsd-mgmt.c:58:mgmt_cbk_spec]
0-mgmt: Volume file changed
[2020-04-01 08:36:29.397448] E [MSGID: 101002]
[graph.y:134:new_volume] 0-parser: new volume
(persistent-write-behind) definition in line 2554 unexpected
[2020-04-01 08:36:29.397546] E [MSGID: 101098]
[xlator.c:938:xlator_tree_free_members] 0-parser: Translator tree not
found
[2020-04-01 08:36:29.397567] E [MSGID: 101098]
[xlator.c:959:xlator_tree_free_memacct] 0-parser: Translator tree not
found
[2020-04-01 08:36:29.403301] E [MSGID: 101021]
[graph.y:377:graphyyerror] 0-parser: syntax error in line 2555: "type"
(allowed tokens are 'volume', 'type', 'subvolumes', 'option', 'end-volume')

[2020-04-01 08:36:29.407495] E [MSGID: 101021]
[graph.y:377:graphyyerror] 0-parser: syntax error in line 2555:
"performance/write-behind"
(allowed tokens are 'volume', 'type', 'subvolumes', 'option', 'end-volume')

The command used: gluster volume set all cluster.op-version 60000
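
To verify that the bump took effect, a minimal check (standard gluster
CLI; the output shown is roughly what to expect):

gluster volume get all cluster.op-version
# Option                                  Value
# ------                                  -----
# cluster.op-version                      60000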

Both volumes are up and accessible. Does this mean anything I should
worry about? Below is the volume info (identical for both volumes).
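(The info was collected the standard way, e.g. via
gluster volume info persistent)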


Best regards,
Hubert

Volume Name: persistent
Type: Replicate
Volume ID: 1971fb67-3c21-4183-9c08-febc2c6237d0
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: tesla:/gluster/lvpersistent/persistent
Brick2: becquerel:/gluster/lvpersistent/persistent
Brick3: dirac:/gluster/lvpersistent/persistent
Options Reconfigured:
performance.client-io-threads: off
nfs.disable: on
transport.address-family: inet
server.outstanding-rpc-limit: 128
performance.cache-invalidation: on
performance.stat-prefetch: on
features.cache-invalidation-timeout: 600
features.cache-invalidation: on
performance.read-ahead: off
performance.io-cache: off
performance.quick-read: on
cluster.self-heal-window-size: 16
cluster.heal-wait-queue-length: 10000
cluster.data-self-heal-algorithm: full
cluster.background-self-heal-count: 256
network.inode-lru-limit: 200000
cluster.shd-max-threads: 8
transport.listen-backlog: 100
performance.least-prio-threads: 8
performance.cache-size: 6GB
cluster.min-free-disk: 1%
performance.io-thread-count: 32
performance.write-behind-window-size: 16MB
performance.cache-max-file-size: 128MB
client.event-threads: 8
server.event-threads: 8
performance.parallel-readdir: on
performance.cache-refresh-timeout: 4
cluster.readdir-optimize: off
performance.md-cache-timeout: 600
performance.nl-cache: off
cluster.lookup-unhashed: on
cluster.shd-wait-qlength: 10000
performance.readdir-ahead: on
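
For reference, options like the above are applied with the usual CLI,
e.g. (using this volume's name):

gluster volume set persistent performance.write-behind-window-size 16MB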

