[Bugs] [Bug 1224119] New: Disperse volume: 1x(4+2) config doesn't sustain 2 brick failures
bugzilla at redhat.com
Fri May 22 08:49:43 UTC 2015
https://bugzilla.redhat.com/show_bug.cgi?id=1224119
Bug ID: 1224119
Summary: Disperse volume: 1x(4+2) config doesn't sustain 2
brick failures
Product: Red Hat Gluster Storage
Version: 3.1
Component: glusterfs
Sub Component: disperse
Keywords: Triaged
Assignee: rhs-bugs at redhat.com
Reporter: byarlaga at redhat.com
QA Contact: byarlaga at redhat.com
CC: bugs at gluster.org, byarlaga at redhat.com,
gluster-bugs at redhat.com, jharriga at redhat.com,
pcuzner at redhat.com, pkarampu at redhat.com
Depends On: 1192971
Blocks: 1186580 (qe_tracker_everglades)
Group: redhat
+++ This bug was initially created as a clone of Bug #1192971 +++
Description of problem:
=======================
Input/Output errors are seen on the client when 2 bricks fail in a 1x(4+2) disperse
volume, even though a (4+2) configuration has redundancy 2 and should therefore
tolerate 2 simultaneous brick failures (any 4 of the 6 fragments are enough to
serve I/O). I/O resumes once the bricks come back online.
Version-Release number of selected component (if applicable):
=============================================================
glusterfs 3.7dev built on Feb 14 2015 01:05:51
How reproducible:
=================
100%
Volume options :
===============
[root@vertigo gluster]# gluster volume get testvol all
Option Value
------ -----
cluster.lookup-unhashed on
cluster.min-free-disk 10%
cluster.min-free-inodes 5%
cluster.rebalance-stats off
cluster.subvols-per-directory (null)
cluster.readdir-optimize off
cluster.rsync-hash-regex (null)
cluster.extra-hash-regex (null)
cluster.dht-xattr-name trusted.glusterfs.dht
cluster.randomize-hash-range-by-gfid off
cluster.local-volume-name (null)
cluster.weighted-rebalance on
cluster.switch-pattern (null)
cluster.entry-change-log on
cluster.read-subvolume (null)
cluster.read-subvolume-index -1
cluster.read-hash-mode 1
cluster.background-self-heal-count 16
cluster.metadata-self-heal on
cluster.data-self-heal on
cluster.entry-self-heal on
cluster.self-heal-daemon on
cluster.heal-timeout 600
cluster.self-heal-window-size 1
cluster.data-change-log on
cluster.metadata-change-log on
cluster.data-self-heal-algorithm (null)
cluster.eager-lock on
cluster.quorum-type none
cluster.quorum-count (null)
cluster.choose-local true
cluster.self-heal-readdir-size 1KB
cluster.post-op-delay-secs 1
cluster.ensure-durability on
cluster.stripe-block-size 128KB
cluster.stripe-coalesce true
diagnostics.latency-measurement off
diagnostics.dump-fd-stats off
diagnostics.count-fop-hits off
diagnostics.brick-log-level INFO
diagnostics.client-log-level INFO
diagnostics.brick-sys-log-level CRITICAL
diagnostics.client-sys-log-level CRITICAL
diagnostics.brick-logger (null)
diagnostics.client-logger (null)
diagnostics.brick-log-format (null)
diagnostics.client-log-format (null)
diagnostics.brick-log-buf-size 5
diagnostics.client-log-buf-size 5
diagnostics.brick-log-flush-timeout 120
diagnostics.client-log-flush-timeout 120
performance.cache-max-file-size 0
performance.cache-min-file-size 0
performance.cache-refresh-timeout 1
performance.cache-priority
performance.cache-size 32MB
performance.io-thread-count 16
performance.high-prio-threads 16
performance.normal-prio-threads 16
performance.low-prio-threads 16
performance.least-prio-threads 1
performance.enable-least-priority on
performance.least-rate-limit 0
performance.cache-size 128MB
performance.flush-behind on
performance.nfs.flush-behind on
performance.write-behind-window-size 1MB
performance.nfs.write-behind-window-size 1MB
performance.strict-o-direct off
performance.nfs.strict-o-direct off
performance.strict-write-ordering off
performance.nfs.strict-write-ordering off
performance.lazy-open yes
performance.read-after-open no
performance.read-ahead-page-count 4
performance.md-cache-timeout 1
features.encryption off
encryption.master-key (null)
encryption.data-key-size 256
encryption.block-size 4096
network.frame-timeout 1800
network.ping-timeout 42
network.tcp-window-size (null)
features.lock-heal off
features.grace-timeout 10
network.remote-dio disable
client.event-threads 2
network.tcp-window-size (null)
network.inode-lru-limit 16384
auth.allow *
auth.reject (null)
transport.keepalive (null)
server.allow-insecure (null)
server.root-squash off
server.anonuid 65534
server.anongid 65534
server.statedump-path /var/run/gluster
server.outstanding-rpc-limit 64
features.lock-heal off
features.grace-timeout (null)
server.ssl (null)
auth.ssl-allow *
server.manage-gids off
client.send-gids on
server.gid-timeout 2
server.own-thread (null)
server.event-threads 2
performance.write-behind on
performance.read-ahead on
performance.readdir-ahead off
performance.io-cache on
performance.quick-read on
performance.open-behind on
performance.stat-prefetch on
performance.client-io-threads off
performance.nfs.write-behind on
performance.nfs.read-ahead off
performance.nfs.io-cache off
performance.nfs.quick-read off
performance.nfs.stat-prefetch off
performance.nfs.io-threads off
performance.force-readdirp true
features.file-snapshot off
features.uss off
features.snapshot-directory .snaps
features.show-snapshot-directory off
network.compression off
network.compression.window-size -15
network.compression.mem-level 8
network.compression.min-size 0
network.compression.compression-level -1
network.compression.debug false
features.limit-usage (null)
features.quota-timeout 0
features.default-soft-limit 80%
features.soft-timeout 60
features.hard-timeout 5
features.alert-time 86400
features.quota-deem-statfs off
geo-replication.indexing off
geo-replication.indexing off
geo-replication.ignore-pid-check off
geo-replication.ignore-pid-check off
features.quota on
debug.trace off
debug.log-history no
debug.log-file no
debug.exclude-ops (null)
debug.include-ops (null)
debug.error-gen off
debug.error-failure (null)
debug.error-number (null)
debug.random-failure off
debug.error-fops (null)
nfs.enable-ino32 no
nfs.mem-factor 15
nfs.export-dirs on
nfs.export-volumes on
nfs.addr-namelookup off
nfs.dynamic-volumes off
nfs.register-with-portmap on
nfs.outstanding-rpc-limit 16
nfs.port 2049
nfs.rpc-auth-unix on
nfs.rpc-auth-null on
nfs.rpc-auth-allow all
nfs.rpc-auth-reject none
nfs.ports-insecure off
nfs.trusted-sync off
nfs.trusted-write off
nfs.volume-access read-write
nfs.export-dir
nfs.disable false
nfs.nlm on
nfs.acl on
nfs.mount-udp off
nfs.mount-rmtab /var/lib/glusterd/nfs/rmtab
nfs.rpc-statd /sbin/rpc.statd
nfs.server-aux-gids off
nfs.drc off
nfs.drc-size 0x20000
nfs.read-size (1 * 1048576ULL)
nfs.write-size (1 * 1048576ULL)
nfs.readdir-size (1 * 1048576ULL)
features.read-only off
features.worm off
storage.linux-aio off
storage.batch-fsync-mode reverse-fsync
storage.batch-fsync-delay-usec 0
storage.owner-uid -1
storage.owner-gid -1
storage.node-uuid-pathinfo off
storage.health-check-interval 30
storage.build-pgfid off
storage.bd-aio off
cluster.server-quorum-type off
cluster.server-quorum-ratio 0
changelog.changelog off
changelog.changelog-dir (null)
changelog.encoding ascii
changelog.rollover-time 15
changelog.fsync-interval 5
changelog.changelog-barrier-timeout 120
features.barrier disable
features.barrier-timeout 120
locks.trace disable
cluster.disperse-self-heal-daemon enable
[root@vertigo gluster]#
Gluster volume info:
=====================
[root@vertigo gluster]# gluster v info
Volume Name: testvol
Type: Disperse
Volume ID: 21ed8908-3458-4834-b93d-161b694c3e37
Status: Started
Number of Bricks: 1 x (4 + 2) = 6
Transport-type: tcp
Bricks:
Brick1: ninja:/rhs/brick1/b1
Brick2: vertigo:/rhs/brick1/b1
Brick3: ninja:/rhs/brick2/b2
Brick4: vertigo:/rhs/brick2/b2
Brick5: ninja:/rhs/brick3/b3
Brick6: vertigo:/rhs/brick3/b3
Options Reconfigured:
client.event-threads: 2
server.event-threads: 2
features.barrier: disable
cluster.disperse-self-heal-daemon: enable
features.quota: on
[root@vertigo gluster]#
Gluster volume status:
======================
[root@vertigo gluster]# gluster v status
Status of volume: testvol
Gluster process Port Online Pid
------------------------------------------------------------------------------
Brick ninja:/rhs/brick1/b1 49152 Y 19369
Brick vertigo:/rhs/brick1/b1 49152 Y 30191
Brick ninja:/rhs/brick2/b2 49153 Y 18934
Brick vertigo:/rhs/brick2/b2 49153 Y 28690
Brick ninja:/rhs/brick3/b3 49154 Y 17499
Brick vertigo:/rhs/brick3/b3 49158 Y 28705
NFS Server on localhost 2049 Y 30205
Quota Daemon on localhost N/A Y 30222
NFS Server on 10.70.34.68 2049 Y 19383
Quota Daemon on 10.70.34.68 N/A Y 19400
Task Status of Volume testvol
------------------------------------------------------------------------------
There are no active volume tasks
[root@vertigo gluster]#
Steps to Reproduce:
1. Create a 1x(4+2) disperse volume.
2. Untar a Linux tarball onto the mount and simulate 2 brick failures by killing
two of the brick processes (see the command sketch below).
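For reference, a minimal command sketch of the setup and failure simulation,
assuming the brick layout and PIDs shown in the volume info/status output above
and a hypothetical client mount at /mnt/glusterfs (the exact commands used are
not part of this report):

# create and start the 1x(4+2) dispersed volume (4 data + 2 redundancy fragments);
# 'force' may be needed since several bricks of the volume sit on the same server
gluster volume create testvol disperse 6 redundancy 2 \
    ninja:/rhs/brick1/b1 vertigo:/rhs/brick1/b1 \
    ninja:/rhs/brick2/b2 vertigo:/rhs/brick2/b2 \
    ninja:/rhs/brick3/b3 vertigo:/rhs/brick3/b3 force
gluster volume start testvol

# mount on a client and untar a Linux tarball onto the mount
mount -t glusterfs ninja:/testvol /mnt/glusterfs
tar -xf linux.tar.xz -C /mnt/glusterfs &

# simulate 2 brick failures: take two brick PIDs from 'gluster volume status testvol'
# (e.g. 19369 and 18934 in the status output above) and kill them
kill -KILL 19369 18934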
Actual results:
===============
Input/Output errors on client mount
Expected results:
================
No errors should be seen
Additional info:
================
Attaching volume statedump files.
--- Additional comment from Pranith Kumar K on 2015-03-09 06:06:33 EDT ---
Tried re-creating the issue with the steps mentioned, but was not able to
reproduce it; I must be missing some important step. Also tried re-creating the
bug without http://review.gluster.com/9717, which fixed the bugs
https://bugzilla.redhat.com/show_bug.cgi?id=1191919
https://bugzilla.redhat.com/show_bug.cgi?id=1188145
Even with that, no luck. For now, adding needinfo on you to figure out whether
we missed something. Please attach mount/brick logs if you encounter it this time.
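For reference (an assumption of default log locations, with /mnt/glusterfs as a
hypothetical mount point), the requested logs would normally be found at:

# on the client: the fuse mount log, named after the mount point
ls /var/log/glusterfs/mnt-glusterfs.log
# on each server: one log per brick, named after the brick path
ls /var/log/glusterfs/bricks/    # e.g. rhs-brick1-b1.log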
--- Additional comment from Bhaskarakiran on 2015-05-06 02:07:09 EDT ---
This is still reproducible on the latest nightly 3.7 build.
--- Additional comment from Paul Cuzner on 2015-05-06 02:34:47 EDT ---
+1 - I also see this issue in my local lab.
--- Additional comment from Paul Cuzner on 2015-05-06 19:01:56 EDT ---
Here's an overview of the test scenario I run:
1. 6 VMs running under KVM.
2. Each VM runs CentOS 7.
3. glusterfs 3.7 nightly RPMs.
4. Mount the volume on a client (also running 3.7).
5. Run a write workload (small 5MB files, one every 2 seconds; see the sketch at
the end of this comment).
6. Kill a node (virsh destroy <ec node>).
7. Observe the impact on the client workload.
In this latest build (3.7.0beta1-0.14.git09bbd5), even killing one node results
in the client writer process hanging.
I will attach further output for additional context.
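For context, a minimal sketch of a writer workload like the one in step 5; this
is an assumption reconstructed from the dd output quoted in a later comment
(80 x 64 KiB = 5 MiB per file, files under /mnt/glusterfs/test-files), not the
actual generator script:

# hypothetical workload generator, one small file every 2 seconds
mkdir -p /mnt/glusterfs/test-files
i=1
while true; do
    echo "Creating file number $i"
    # 80 x 64 KiB = 5242880 bytes, matching the dd summary lines in the log
    dd if=/dev/zero of=/mnt/glusterfs/test-files/test_file_$i bs=64k count=80
    echo "File created, pausing for 2 seconds"
    sleep 2
    i=$((i+1))
done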
--- Additional comment from Paul Cuzner on 2015-05-06 19:04:38 EDT ---
--- Additional comment from Paul Cuzner on 2015-05-18 17:55:58 EDT ---
Quick update:
I updated glusterfs to glusterfs-server-3.7.0beta2-0.2.gitc1cd4fa.el7 on the
nodes and the client, and reran the test scenario.
Result - beta2 no longer hangs the client when the nodes disappear, which is
great.
But...
self-heal is not clearing, and is showing errors (attached).
Also, the workload generator, which was running throughout the nodes being
killed and restarted, saw I/O errors:
5242880 bytes (5.2 MB) copied, 0.408578 s, 12.8 MB/s
File created, pausing for 2 seconds
Creating file number 226
dd: failed to open ‘/mnt/glusterfs/test-files/test_file_226’: Input/output error
File created, pausing for 2 seconds
Creating file number 227
dd: failed to open ‘/mnt/glusterfs/test-files/test_file_227’: Input/output error
File created, pausing for 2 seconds
Creating file number 228
dd: failed to open ‘/mnt/glusterfs/test-files/test_file_228’: Input/output error
File created, pausing for 2 seconds
Creating file number 229
80+0 records in
80+0 records out
5242880 bytes (5.2 MB) copied, 0.868086 s, 6.0 MB/s
File created, pausing for 2 seconds
Creating file number 230
80+0 records in
80+0 records out
5242880 bytes (5.2 MB) copied, 0.368102 s, 14.2 MB/s
File created, pausing for 2 seconds
Also, I tried a full self-heal manually:
[root@gfs-ec2 glusterfs]# gluster vol heal ec_volume full
Commit failed on ebc870ff-6149-4f2d-bd83-ece1f82b839d. Please check log file
for details.
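For reference, a couple of standard commands that may help narrow this down (a
sketch, not output captured from this setup): 'gluster peer status' maps the
UUID in the commit failure to a host, and pending heal entries per brick can be
listed with heal info.

# map the UUID from the 'Commit failed on ...' message to a peer
gluster peer status
# list entries still pending heal on each brick
gluster volume heal ec_volume info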
--- Additional comment from Paul Cuzner on 2015-05-18 17:56:38 EDT ---
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1186580
[Bug 1186580] QE tracker bug for Everglades
https://bugzilla.redhat.com/show_bug.cgi?id=1192971
[Bug 1192971] Disperse volume: 1x(4+2) config doesn't sustain 2 brick
failures