[Bugs] [Bug 1702316] New: Cannot upgrade 5.x volume to 6.1 because of unused 'crypt' and 'bd' xlators
bugzilla at redhat.com
Tue Apr 23 13:37:10 UTC 2019
https://bugzilla.redhat.com/show_bug.cgi?id=1702316
Bug ID: 1702316
Summary: Cannot upgrade 5.x volume to 6.1 because of unused
'crypt' and 'bd' xlators
Product: GlusterFS
Version: 6
Hardware: x86_64
OS: Linux
Status: NEW
Component: core
Severity: medium
Assignee: bugs at gluster.org
Reporter: rob.dewit at coosto.com
CC: bugs at gluster.org
Target Milestone: ---
Classification: Community
Description of problem: After upgrading from 5.3 to 6.1, gluster refuses to
start bricks that apparently reference the 'crypt' and 'bd' xlators. Neither
was specified at volume creation, and according to
'gluster volume get VOLUME all' they are not in use.
Version-Release number of selected component (if applicable): 6.1
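Note: the startup log below shows glusterd restoring op-version 50000, i.e.
the cluster op-version was never raised after the upgrade to 6.x. It is not
confirmed that this is related to the failure, but the documented post-upgrade
step, once every peer runs 6.x, is to check and bump it:

  # current cluster op-version (still 50000 on this cluster)
  gluster volume get all cluster.op-version

  # once ALL peers run 6.x, raise it to the 6.x op-version
  gluster volume set all cluster.op-version 60000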
[2019-04-23 10:36:44.325141] I [MSGID: 100030] [glusterfsd.c:2849:main]
0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version 6.1 (args:
/usr/sbin/glusterd --pid-file=/run/glusterd.pid)
[2019-04-23 10:36:44.325505] I [glusterfsd.c:2556:daemonize] 0-glusterfs: Pid
of current running process is 31705
[2019-04-23 10:36:44.327314] I [MSGID: 106478] [glusterd.c:1422:init]
0-management: Maximum allowed open file descriptors set to 65536
[2019-04-23 10:36:44.327354] I [MSGID: 106479] [glusterd.c:1478:init]
0-management: Using /var/lib/glusterd as working directory
[2019-04-23 10:36:44.327363] I [MSGID: 106479] [glusterd.c:1484:init]
0-management: Using /var/run/gluster as pid file working directory
[2019-04-23 10:36:44.330126] I [socket.c:931:__socket_server_bind]
0-socket.management: process started listening on port (36203)
[2019-04-23 10:36:44.330258] E [rpc-transport.c:297:rpc_transport_load]
0-rpc-transport: /usr/lib64/glusterfs/6.1/rpc-transport/rdma.so: cannot open
shared object file: No such file or directory
[2019-04-23 10:36:44.330267] W [rpc-transport.c:301:rpc_transport_load]
0-rpc-transport: volume 'rdma.management': transport-type 'rdma' is not valid
or not found on this machine
[2019-04-23 10:36:44.330274] W [rpcsvc.c:1985:rpcsvc_create_listener]
0-rpc-service: cannot create listener, initing the transport failed
[2019-04-23 10:36:44.330281] E [MSGID: 106244] [glusterd.c:1785:init]
0-management: creation of 1 listeners failed, continuing with succeeded
transport
[2019-04-23 10:36:44.331976] I [socket.c:902:__socket_server_bind]
0-socket.management: closing (AF_UNIX) reuse check socket 13
[2019-04-23 10:36:46.805843] I [MSGID: 106513]
[glusterd-store.c:2394:glusterd_restore_op_version] 0-glusterd: retrieved
op-version: 50000
[2019-04-23 10:36:46.878878] I [MSGID: 106544]
[glusterd.c:152:glusterd_uuid_init] 0-management: retrieved UUID:
5104ed01-f959-4a82-bbd6-17d4dd177ec2
[2019-04-23 10:36:46.881463] E [mem-pool.c:351:__gf_free]
(-->/usr/lib64/glusterfs/6.1/xlator/mgmt/glusterd.so(+0x49190) [0x7fb0ecb64190]
-->/usr/lib64/glusterfs/6.1/xlator/mgmt/glusterd.so(+0x48f72) [0x7fb0ecb63f72]
-->/usr/lib64/libglusterfs.so.0(__gf_free+0x21d) [0x7fb0f25091dd] ) 0-:
Assertion failed: mem_acct->rec[header->type].size >= header->size
[2019-04-23 10:36:46.908134] I [MSGID: 106498]
[glusterd-handler.c:3669:glusterd_friend_add_from_peerinfo] 0-management:
connect returned 0
[2019-04-23 10:36:46.910052] I [MSGID: 106498]
[glusterd-handler.c:3669:glusterd_friend_add_from_peerinfo] 0-management:
connect returned 0
[2019-04-23 10:36:46.910135] W [MSGID: 106061]
[glusterd-handler.c:3472:glusterd_transport_inet_options_build] 0-glusterd:
Failed to get tcp-user-timeout
[2019-04-23 10:36:46.910167] I [rpc-clnt.c:1005:rpc_clnt_connection_init]
0-management: setting frame-timeout to 600
[2019-04-23 10:36:46.911425] I [rpc-clnt.c:1005:rpc_clnt_connection_init]
0-management: setting frame-timeout to 600
Final graph:
+------------------------------------------------------------------------------+
1: volume management
2: type mgmt/glusterd
3: option rpc-auth.auth-glusterfs on
4: option rpc-auth.auth-unix on
5: option rpc-auth.auth-null on
6: option rpc-auth-allow-insecure on
7: option transport.listen-backlog 1024
8: option event-threads 1
9: option ping-timeout 0
10: option transport.socket.read-fail-log off
11: option transport.socket.keepalive-interval 2
12: option transport.socket.keepalive-time 10
13: option transport-type rdma
14: option working-directory /var/lib/glusterd
15: end-volume
16:
+------------------------------------------------------------------------------+
[2019-04-23 10:36:46.911405] W [MSGID: 106061]
[glusterd-handler.c:3472:glusterd_transport_inet_options_build] 0-glusterd:
Failed to get tcp-user-timeout
[2019-04-23 10:36:46.914845] I [MSGID: 101190]
[event-epoll.c:680:event_dispatch_epoll_worker] 0-epoll: Started thread with
index 0
[2019-04-23 10:36:47.265981] I [MSGID: 106493]
[glusterd-rpc-ops.c:468:__glusterd_friend_add_cbk] 0-glusterd: Received ACC
from uuid: a6ff7d5b-1e8d-4cdc-97cf-4e03b89462a3, host: 10.10.0.25, port: 0
[2019-04-23 10:36:47.271481] I [glusterd-utils.c:6312:glusterd_brick_start]
0-management: starting a fresh brick process for brick /local.mnt/glfs/brick
[2019-04-23 10:36:47.273759] I [rpc-clnt.c:1005:rpc_clnt_connection_init]
0-management: setting frame-timeout to 600
[2019-04-23 10:36:47.336220] I [rpc-clnt.c:1005:rpc_clnt_connection_init]
0-nfs: setting frame-timeout to 600
[2019-04-23 10:36:47.336328] I [MSGID: 106131]
[glusterd-proc-mgmt.c:86:glusterd_proc_stop] 0-management: nfs already stopped
[2019-04-23 10:36:47.336383] I [MSGID: 106568]
[glusterd-svc-mgmt.c:253:glusterd_svc_stop] 0-management: nfs service is
stopped
[2019-04-23 10:36:47.336735] I [rpc-clnt.c:1005:rpc_clnt_connection_init]
0-glustershd: setting frame-timeout to 600
[2019-04-23 10:36:47.337733] I [MSGID: 106131]
[glusterd-proc-mgmt.c:86:glusterd_proc_stop] 0-management: glustershd already
stopped
[2019-04-23 10:36:47.337755] I [MSGID: 106568]
[glusterd-svc-mgmt.c:253:glusterd_svc_stop] 0-management: glustershd service is
stopped
[2019-04-23 10:36:47.337804] I [MSGID: 106567]
[glusterd-svc-mgmt.c:220:glusterd_svc_start] 0-management: Starting glustershd
service
[2019-04-23 10:36:48.340193] I [rpc-clnt.c:1005:rpc_clnt_connection_init]
0-quotad: setting frame-timeout to 600
[2019-04-23 10:36:48.340446] I [MSGID: 106131]
[glusterd-proc-mgmt.c:86:glusterd_proc_stop] 0-management: quotad already
stopped
[2019-04-23 10:36:48.340482] I [MSGID: 106568]
[glusterd-svc-mgmt.c:253:glusterd_svc_stop] 0-management: quotad service is
stopped
[2019-04-23 10:36:48.340525] I [rpc-clnt.c:1005:rpc_clnt_connection_init]
0-bitd: setting frame-timeout to 600
[2019-04-23 10:36:48.340662] I [MSGID: 106131]
[glusterd-proc-mgmt.c:86:glusterd_proc_stop] 0-management: bitd already stopped
[2019-04-23 10:36:48.340686] I [MSGID: 106568]
[glusterd-svc-mgmt.c:253:glusterd_svc_stop] 0-management: bitd service is
stopped
[2019-04-23 10:36:48.340721] I [rpc-clnt.c:1005:rpc_clnt_connection_init]
0-scrub: setting frame-timeout to 600
[2019-04-23 10:36:48.340851] I [MSGID: 106131]
[glusterd-proc-mgmt.c:86:glusterd_proc_stop] 0-management: scrub already
stopped
[2019-04-23 10:36:48.340865] I [MSGID: 106568]
[glusterd-svc-mgmt.c:253:glusterd_svc_stop] 0-management: scrub service is
stopped
[2019-04-23 10:36:48.340913] I [rpc-clnt.c:1005:rpc_clnt_connection_init]
0-snapd: setting frame-timeout to 600
[2019-04-23 10:36:48.341005] I [rpc-clnt.c:1005:rpc_clnt_connection_init]
0-gfproxyd: setting frame-timeout to 600
[2019-04-23 10:36:48.342056] I [MSGID: 106493]
[glusterd-rpc-ops.c:681:__glusterd_friend_update_cbk] 0-management: Received
ACC from uuid: a6ff7d5b-1e8d-4cdc-97cf-4e03b89462a3
[2019-04-23 10:36:48.342125] I [MSGID: 106493]
[glusterd-rpc-ops.c:468:__glusterd_friend_add_cbk] 0-glusterd: Received ACC
from uuid: 88496e0c-298b-47ef-98a1-a884ca68d7d4, host: 10.10.0.208, port: 0
[2019-04-23 10:36:48.378690] I [MSGID: 106493]
[glusterd-rpc-ops.c:681:__glusterd_friend_update_cbk] 0-management: Received
ACC from uuid: 88496e0c-298b-47ef-98a1-a884ca68d7d4
[2019-04-23 10:37:15.410095] W [MSGID: 101095]
[xlator.c:210:xlator_volopt_dynload] 0-xlator:
/usr/lib64/glusterfs/6.1/xlator/encryption/crypt.so: cannot open shared object
file: No such file or directory
The message "W [MSGID: 101095] [xlator.c:210:xlator_volopt_dynload] 0-xlator:
/usr/lib64/glusterfs/6.1/xlator/encryption/crypt.so: cannot open shared object
file: No such file or directory" repeated 2 times between [2019-04-23
10:37:15.410095] and [2019-04-23 10:37:15.410162]
[2019-04-23 10:37:15.417228] E [MSGID: 101097]
[xlator.c:218:xlator_volopt_dynload] 0-xlator: dlsym(xlator_api) missing:
/usr/lib64/glusterfs/6.1/rpc-transport/socket.so: undefined symbol: xlator_api
The message "E [MSGID: 101097] [xlator.c:218:xlator_volopt_dynload] 0-xlator:
dlsym(xlator_api) missing: /usr/lib64/glusterfs/6.1/rpc-transport/socket.so:
undefined symbol: xlator_api" repeated 7 times between [2019-04-23
10:37:15.417228] and [2019-04-23 10:37:15.417319]
[2019-04-23 10:37:15.449809] W [MSGID: 101095]
[xlator.c:210:xlator_volopt_dynload] 0-xlator:
/usr/lib64/glusterfs/6.1/xlator/storage/bd.so: cannot open shared object file:
No such file or directory
[2019-04-23 12:23:14.757482] W [MSGID: 101095]
[xlator.c:210:xlator_volopt_dynload] 0-xlator:
/usr/lib64/glusterfs/6.1/xlator/encryption/crypt.so: cannot open shared object
file: No such file or directory
[2019-04-23 12:23:14.765810] E [MSGID: 101097]
[xlator.c:218:xlator_volopt_dynload] 0-xlator: dlsym(xlator_api) missing:
/usr/lib64/glusterfs/6.1/rpc-transport/socket.so: undefined symbol: xlator_api
[2019-04-23 12:23:14.801394] W [MSGID: 101095]
[xlator.c:210:xlator_volopt_dynload] 0-xlator:
/usr/lib64/glusterfs/6.1/xlator/storage/bd.so: cannot open shared object file:
No such file or directory
The message "W [MSGID: 101095] [xlator.c:210:xlator_volopt_dynload] 0-xlator:
/usr/lib64/glusterfs/6.1/xlator/encryption/crypt.so: cannot open shared object
file: No such file or directory" repeated 2 times between [2019-04-23
12:23:14.757482] and [2019-04-23 12:23:14.757578]
The message "E [MSGID: 101097] [xlator.c:218:xlator_volopt_dynload] 0-xlator:
dlsym(xlator_api) missing: /usr/lib64/glusterfs/6.1/rpc-transport/socket.so:
undefined symbol: xlator_api" repeated 7 times between [2019-04-23
12:23:14.765810] and [2019-04-23 12:23:14.765864]
[2019-04-23 12:29:45.957524] I [MSGID: 106488]
[glusterd-handler.c:1559:__glusterd_handle_cli_get_volume] 0-management:
Received get vol req
[2019-04-23 12:30:06.917403] I [MSGID: 106488]
[glusterd-handler.c:1559:__glusterd_handle_cli_get_volume] 0-management:
Received get vol req
[2019-04-23 12:38:25.514866] W [MSGID: 101095]
[xlator.c:210:xlator_volopt_dynload] 0-xlator:
/usr/lib64/glusterfs/6.1/xlator/encryption/crypt.so: cannot open shared object
file: No such file or directory
[2019-04-23 12:38:25.522473] E [MSGID: 101097]
[xlator.c:218:xlator_volopt_dynload] 0-xlator: dlsym(xlator_api) missing:
/usr/lib64/glusterfs/6.1/rpc-transport/socket.so: undefined symbol: xlator_api
[2019-04-23 12:38:25.555952] W [MSGID: 101095]
[xlator.c:210:xlator_volopt_dynload] 0-xlator:
/usr/lib64/glusterfs/6.1/xlator/storage/bd.so: cannot open shared object file:
No such file or directory
The message "W [MSGID: 101095] [xlator.c:210:xlator_volopt_dynload] 0-xlator:
/usr/lib64/glusterfs/6.1/xlator/encryption/crypt.so: cannot open shared object
file: No such file or directory" repeated 2 times between [2019-04-23
12:38:25.514866] and [2019-04-23 12:38:25.514931]
The message "E [MSGID: 101097] [xlator.c:218:xlator_volopt_dynload] 0-xlator:
dlsym(xlator_api) missing: /usr/lib64/glusterfs/6.1/rpc-transport/socket.so:
undefined symbol: xlator_api" repeated 7 times between [2019-04-23
12:38:25.522473] and [2019-04-23 12:38:25.522545]
[2019-04-23 12:52:00.569988] W [glusterfsd.c:1570:cleanup_and_exit]
(-->/lib64/libpthread.so.0(+0x7504) [0x7fb0f1310504]
-->/usr/sbin/glusterd(glusterfs_sigwaiter+0xd5) [0x409f45]
-->/usr/sbin/glusterd(cleanup_and_exit+0x57) [0x409db7] ) 0-: received signum
(15), shutting down
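The crypt.so and bd.so warnings above appear to be emitted while glusterd
loads the option definitions of every known xlator (e.g. while serving a
'gluster volume get ... all' request); the crypt (encryption) and bd (block
device) xlators were removed upstream in release 6, so their shared objects no
longer ship. A quick sanity check on the upgraded node, using the paths from
the log above:

  ls /usr/lib64/glusterfs/6.1/xlator/encryption/crypt.so \
     /usr/lib64/glusterfs/6.1/xlator/storage/bd.so
  # ls: cannot access ...: No such file or directory (for both)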
Output of 'gluster volume get VOLUME all' for the affected volume:

Option Value
------ -----
cluster.lookup-unhashed on
cluster.lookup-optimize on
cluster.min-free-disk 10%
cluster.min-free-inodes 5%
cluster.rebalance-stats off
cluster.subvols-per-directory (null)
cluster.readdir-optimize on
cluster.rsync-hash-regex (null)
cluster.extra-hash-regex (null)
cluster.dht-xattr-name trusted.glusterfs.dht
cluster.randomize-hash-range-by-gfid off
cluster.rebal-throttle normal
cluster.lock-migration off
cluster.force-migration off
cluster.local-volume-name (null)
cluster.weighted-rebalance on
cluster.switch-pattern (null)
cluster.entry-change-log on
cluster.read-subvolume (null)
cluster.read-subvolume-index -1
cluster.read-hash-mode 1
cluster.background-self-heal-count 8
cluster.metadata-self-heal on
cluster.data-self-heal on
cluster.entry-self-heal on
cluster.self-heal-daemon enable
cluster.heal-timeout 600
cluster.self-heal-window-size 1
cluster.data-change-log on
cluster.metadata-change-log on
cluster.data-self-heal-algorithm (null)
cluster.eager-lock on
disperse.eager-lock on
disperse.other-eager-lock on
disperse.eager-lock-timeout 1
disperse.other-eager-lock-timeout 1
cluster.quorum-type auto
cluster.quorum-count (null)
cluster.choose-local true
cluster.self-heal-readdir-size 1KB
cluster.post-op-delay-secs 1
cluster.ensure-durability on
cluster.consistent-metadata no
cluster.heal-wait-queue-length 128
cluster.favorite-child-policy none
cluster.full-lock yes
cluster.stripe-block-size 128KB
cluster.stripe-coalesce true
diagnostics.latency-measurement off
diagnostics.dump-fd-stats off
diagnostics.count-fop-hits off
diagnostics.brick-log-level CRITICAL
diagnostics.client-log-level CRITICAL
diagnostics.brick-sys-log-level CRITICAL
diagnostics.client-sys-log-level CRITICAL
diagnostics.brick-logger (null)
diagnostics.client-logger (null)
diagnostics.brick-log-format (null)
diagnostics.client-log-format (null)
diagnostics.brick-log-buf-size 5
diagnostics.client-log-buf-size 5
diagnostics.brick-log-flush-timeout 120
diagnostics.client-log-flush-timeout 120
diagnostics.stats-dump-interval 0
diagnostics.fop-sample-interval 0
diagnostics.stats-dump-format json
diagnostics.fop-sample-buf-size 65535
diagnostics.stats-dnscache-ttl-sec 86400
performance.cache-max-file-size 0
performance.cache-min-file-size 0
performance.cache-refresh-timeout 1
performance.cache-priority
performance.cache-size 32MB
performance.io-thread-count 16
performance.high-prio-threads 16
performance.normal-prio-threads 16
performance.low-prio-threads 16
performance.least-prio-threads 1
performance.enable-least-priority on
performance.iot-watchdog-secs (null)
performance.iot-cleanup-disconnected-reqs off
performance.iot-pass-through false
performance.io-cache-pass-through false
performance.cache-size 128MB
performance.qr-cache-timeout 1
performance.cache-invalidation on
performance.ctime-invalidation false
performance.flush-behind on
performance.nfs.flush-behind on
performance.write-behind-window-size 1MB
performance.resync-failed-syncs-after-fsync off
performance.nfs.write-behind-window-size 1MB
performance.strict-o-direct off
performance.nfs.strict-o-direct off
performance.strict-write-ordering off
performance.nfs.strict-write-ordering off
performance.write-behind-trickling-writes on
performance.aggregate-size 128KB
performance.nfs.write-behind-trickling-writes on
performance.lazy-open yes
performance.read-after-open yes
performance.open-behind-pass-through false
performance.read-ahead-page-count 4
performance.read-ahead-pass-through false
performance.readdir-ahead-pass-through false
performance.md-cache-pass-through false
performance.md-cache-timeout 600
performance.cache-swift-metadata true
performance.cache-samba-metadata false
performance.cache-capability-xattrs true
performance.cache-ima-xattrs true
performance.md-cache-statfs off
performance.xattr-cache-list
performance.nl-cache-pass-through false
features.encryption off
encryption.master-key (null)
encryption.data-key-size 256
encryption.block-size 4096
network.frame-timeout 1800
network.ping-timeout 42
network.tcp-window-size (null)
network.remote-dio disable
client.event-threads 2
client.tcp-user-timeout 0
client.keepalive-time 20
client.keepalive-interval 2
client.keepalive-count 9
network.tcp-window-size (null)
network.inode-lru-limit 200000
auth.allow *
auth.reject (null)
transport.keepalive 1
server.allow-insecure on
server.root-squash off
server.anonuid 65534
server.anongid 65534
server.statedump-path /var/run/gluster
server.outstanding-rpc-limit 64
server.ssl (null)
auth.ssl-allow *
server.manage-gids off
server.dynamic-auth on
client.send-gids on
server.gid-timeout 300
server.own-thread (null)
server.event-threads 1
server.tcp-user-timeout 0
server.keepalive-time 20
server.keepalive-interval 2
server.keepalive-count 9
transport.listen-backlog 1024
ssl.own-cert (null)
ssl.private-key (null)
ssl.ca-list (null)
ssl.crl-path (null)
ssl.certificate-depth (null)
ssl.cipher-list (null)
ssl.dh-param (null)
ssl.ec-curve (null)
transport.address-family inet
performance.write-behind on
performance.read-ahead on
performance.readdir-ahead on
performance.io-cache on
performance.quick-read on
performance.open-behind on
performance.nl-cache off
performance.stat-prefetch on
performance.client-io-threads off
performance.nfs.write-behind on
performance.nfs.read-ahead off
performance.nfs.io-cache off
performance.nfs.quick-read off
performance.nfs.stat-prefetch off
performance.nfs.io-threads off
performance.force-readdirp true
performance.cache-invalidation on
features.uss off
features.snapshot-directory .snaps
features.show-snapshot-directory off
features.tag-namespaces off
network.compression off
network.compression.window-size -15
network.compression.mem-level 8
network.compression.min-size 0
network.compression.compression-level -1
network.compression.debug false
features.default-soft-limit 80%
features.soft-timeout 60
features.hard-timeout 5
features.alert-time 86400
features.quota-deem-statfs off
geo-replication.indexing off
geo-replication.indexing off
geo-replication.ignore-pid-check off
geo-replication.ignore-pid-check off
features.quota off
features.inode-quota off
features.bitrot disable
debug.trace off
debug.log-history no
debug.log-file no
debug.exclude-ops (null)
debug.include-ops (null)
debug.error-gen off
debug.error-failure (null)
debug.error-number (null)
debug.random-failure off
debug.error-fops (null)
nfs.enable-ino32 no
nfs.mem-factor 15
nfs.export-dirs on
nfs.export-volumes on
nfs.addr-namelookup off
nfs.dynamic-volumes off
nfs.register-with-portmap on
nfs.outstanding-rpc-limit 16
nfs.port 2049
nfs.rpc-auth-unix on
nfs.rpc-auth-null on
nfs.rpc-auth-allow all
nfs.rpc-auth-reject none
nfs.ports-insecure off
nfs.trusted-sync off
nfs.trusted-write off
nfs.volume-access read-write
nfs.export-dir
nfs.disable on
nfs.nlm on
nfs.acl on
nfs.mount-udp off
nfs.mount-rmtab /var/lib/glusterd/nfs/rmtab
nfs.rpc-statd /sbin/rpc.statd
nfs.server-aux-gids off
nfs.drc off
nfs.drc-size 0x20000
nfs.read-size (1 * 1048576ULL)
nfs.write-size (1 * 1048576ULL)
nfs.readdir-size (1 * 1048576ULL)
nfs.rdirplus on
nfs.event-threads 1
nfs.exports-auth-enable (null)
nfs.auth-refresh-interval-sec (null)
nfs.auth-cache-ttl-sec (null)
features.read-only off
features.worm off
features.worm-file-level off
features.worm-files-deletable on
features.default-retention-period 120
features.retention-mode relax
features.auto-commit-period 180
storage.linux-aio off
storage.batch-fsync-mode reverse-fsync
storage.batch-fsync-delay-usec 0
storage.owner-uid -1
storage.owner-gid -1
storage.node-uuid-pathinfo off
storage.health-check-interval 30
storage.build-pgfid off
storage.gfid2path on
storage.gfid2path-separator :
storage.reserve 1
storage.health-check-timeout 10
storage.fips-mode-rchecksum off
storage.force-create-mode 0000
storage.force-directory-mode 0000
storage.create-mask 0777
storage.create-directory-mask 0777
storage.max-hardlinks 100
storage.ctime off
storage.bd-aio off
config.gfproxyd off
cluster.server-quorum-type off
cluster.server-quorum-ratio 0
changelog.changelog off
changelog.changelog-dir {{ brick.path }}/.glusterfs/changelogs
changelog.encoding ascii
changelog.rollover-time 15
changelog.fsync-interval 5
changelog.changelog-barrier-timeout 120
changelog.capture-del-path off
features.barrier disable
features.barrier-timeout 120
features.trash off
features.trash-dir .trashcan
features.trash-eliminate-path (null)
features.trash-max-filesize 5MB
features.trash-internal-op off
cluster.enable-shared-storage disable
locks.trace off
locks.mandatory-locking off
cluster.disperse-self-heal-daemon enable
cluster.quorum-reads no
client.bind-insecure (null)
features.timeout 45
features.failover-hosts (null)
features.shard off
features.shard-block-size 64MB
features.shard-lru-limit 16384
features.shard-deletion-rate 100
features.scrub-throttle lazy
features.scrub-freq biweekly
features.scrub false
features.expiry-time 120
features.cache-invalidation on
features.cache-invalidation-timeout 600
features.leases off
features.lease-lock-recall-timeout 60
disperse.background-heals 8
disperse.heal-wait-qlength 128
cluster.heal-timeout 600
dht.force-readdirp on
disperse.read-policy gfid-hash
cluster.shd-max-threads 1
cluster.shd-wait-qlength 1024
cluster.locking-scheme full
cluster.granular-entry-heal no
features.locks-revocation-secs 0
features.locks-revocation-clear-all false
features.locks-revocation-max-blocked 0
features.locks-monkey-unlocking false
features.locks-notify-contention no
features.locks-notify-contention-delay 5
disperse.shd-max-threads 1
disperse.shd-wait-qlength 1024
disperse.cpu-extensions auto
disperse.self-heal-window-size 1
cluster.use-compound-fops off
performance.parallel-readdir off
performance.rda-request-size 131072
performance.rda-low-wmark 4096
performance.rda-high-wmark 128KB
performance.rda-cache-limit 10MB
performance.nl-cache-positive-entry false
performance.nl-cache-limit 10MB
performance.nl-cache-timeout 60
cluster.brick-multiplex off
cluster.max-bricks-per-process 0
disperse.optimistic-change-log on
disperse.stripe-cache 4
cluster.halo-enabled False
cluster.halo-shd-max-latency 99999
cluster.halo-nfsd-max-latency 5
cluster.halo-max-latency 5
cluster.halo-max-replicas 99999
cluster.halo-min-replicas 2
cluster.daemon-log-level INFO
debug.delay-gen off
delay-gen.delay-percentage 10%
delay-gen.delay-duration 100000
delay-gen.enable
disperse.parallel-writes on
features.sdfs on
features.cloudsync off
features.utime off
ctime.noatime on
feature.cloudsync-storetype (null)
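For quick reference, the encryption and bd related entries can be filtered
straight out of the dump; both features are off, matching the claim that the
xlators are unused:

  gluster volume get VOLUME all | grep -E 'features.encryption|storage.bd'
  # features.encryption                     off
  # storage.bd-aio                          off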